>So in a ZFS boot disk configuration (rpool) in a running environment, it's
>not possible?

The example I gave grows the rpool while running from the rpool.

But you need a recent version of ZFS to grow the pool while it is in use.
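On a current ZFS release the expansion can be picked up entirely online; a minimal sketch, assuming the root pool is named rpool and sits on a slice like c0t0d0s0 (the device name is a placeholder, not from the original post):

```shell
# Grow a live root pool after the underlying slice has been enlarged
# and relabeled with format(1M). Device name is illustrative only.
zpool set autoexpand=on rpool        # auto-grow when the device reports a larger size
zpool online -e rpool c0t0d0s0       # or: explicitly expand this vdev now
zpool list rpool                     # new size should appear without a reboot
```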

>On Fri, Feb 19, 2010 at 9:25 AM, <casper....@sun.com> wrote:
>
>>
>>
>> >Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC
>> label
>> >without losing data as the OS is built on this volume?
>>
>>
>> Sure, as long as the new partition starts on the same block and is longer.
>>
>> It was a bit more difficult with UFS, but for ZFS it is very simple.
>>
>> I had a few systems with two ufs root slices using live upgrade:
>>
>>        <slice 1><slice 2><swap>
>>
>> First I booted from <slice 2>
>> ludelete "slice1"
>> zpool create rpool "slice1"
>> lucreate -p rpool
>> luactivate slice1
>> init 6
>> from the zfs root:
>> ludelete slice2
>> format:
>>         remove slice2;
>>         grow slice1 to incorporate slice2
>>         label
>>
>> At that time I needed to reboot to get the new device size reflected in
>> zpool list; today that is no longer needed
>>
>> Casper
>>
>>
>
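Casper's live-upgrade migration and the final grow step can be sketched as a shell transcript. The boot-environment and device names below (s10_slice1, s10_zfs, c0t0d0s0) are illustrative placeholders, not taken from the original post:

```shell
# Booted from the second UFS root slice:
ludelete s10_slice1              # free the BE on the first root slice (name assumed)
zpool create rpool c0t0d0s0      # build the root pool on the freed slice
lucreate -p rpool -n s10_zfs     # copy the running BE into the pool
luactivate s10_zfs               # make the ZFS BE the one to boot
init 6                           # reboot into the ZFS root

# From the ZFS root:
ludelete s10_slice2              # free the remaining UFS slice
format                           # delete slice 2, grow slice 0 over it, relabel
zpool online -e rpool c0t0d0s0   # on current ZFS this picks up the new size live
```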


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss