Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-20 Thread Paul Mather

On May 20, 2019, at 5:09 AM, tech-lists  wrote:


On Sun, May 19, 2019 at 10:17:35PM -0500, Adam wrote:

On Sun, May 19, 2019 at 9:47 PM tech-lists  wrote:


Thanks very much to you both, all sorted now. I didn't realise there was
a 2TB limit for MBR either. Can I shrink the 4TB to 2TB on the zfs side
without scrambling the ufs on the guest?


You can snapshot the zvol to be safe, but you should be able to shrink it
to the existing partition size. If it's a sparse zvol, it may not make that
much difference.


The zvol has about 515GB data. Hopefully zfs is smart enough to shrink
to the MBR boundary.



A ZVOL is just a container. ZFS has no implicit knowledge of what you are
using it for or whether it has any particular partition table inside it.
It's your responsibility to size the ZVOL appropriately. (TL;DR: ZVOLs
have no concept of an "MBR boundary.")
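
For what it's worth, MBR can address at most 2^32 512-byte sectors, i.e.
4294967296 * 512 = 2199023255552 bytes, exactly 2 TiB, so that is the useful
ceiling here. A minimal sketch of setting the zvol to match (the dataset name
zroot/vm/guest0 is a placeholder for your actual zvol):

# zfs set volsize=2T zroot/vm/guest0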


Cheers,

Paul.



Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-20 Thread Eugene Grosbein
20.05.2019 9:14, Freddie Cash wrote:

> On Sun, May 19, 2019, 6:59 PM Paul Mather,  wrote:
> 
>> On May 19, 2019, at 9:46 PM, tech-lists  wrote:
>>
>>> Hi,
>>>
>>> context is 12-stable, zfs, bhyve
>>>
>>> I have a zvol-backed bhyve guest. Its zvol size was initially 512GB
>>> It needed to be expanded to 4TB. That worked fine.
>>>
>>> The problem is the freebsd guest is UFS and I can't seem to make it see
>>> the new size. But zfs list -o size on the host shows that as far as zfs is
>>> concerned, it's 4TB
>>>
>>> On the guest, I've tried running growfs / but it says requested size is
>>> the same as the size it already is (508GB)
>>>
>>> gpart show on the guest has the following
>>>
>>> # gpart show
>>> =>          63  4294967232  vtbd0  MBR  (4.0T)
>>>             63           1         - free -  (512B)
>>>             64  4294967216      1  freebsd  [active]  (2.0T)
>>>     4294967280          15         - free -  (7.5K)
>>>
>>> =>           0  4294967216  vtbd0s1  BSD  (2.0T)
>>>              0  1065353216        1  freebsd-ufs   (508G)
>>>     1065353216     8388544        2  freebsd-swap  (4.0G)
>>>     1073741760  3221225456           - free -  (1.5T)
>>>
>>> I'm not understanding the double output, or why growfs hasn't worked on
>>> the guest ufs. Can anyone help please?
>>
>>
>> Given the above, the freebsd-ufs partition can't grow because there is a
>> freebsd-swap partition between it and the free space you've added at the
>> end of the volume.
>>
>> You'd need to delete the swap partition (or otherwise move it to the end of
>> the partition on the volume) before you could successfully growfs the
>> freebsd-ufs partition.
>>
> 
> Even if you do all that, you won't be able to use more than 2 TB anyway, as
> that's all MBR supports.
> 
> If you need more than 2 TB, you'll need to backup, repartition with GPT,
> and restore from backups.

Strictly speaking, FreeBSD is capable of using a "disk" over 2TB with MBR,
and there are multiple ways to achieve that. The simplest one is to boot once
using another root file system (an mdconfig'ed image, iSCSI, or just other
local media) and use "graid label -S" on the large media to create a GRAID
"Promise" array with two SINGLE volumes.

The first volume should span the boot/root partition in the MBR, and then
instead of /dev/vtbd0s1 it will show up as /dev/raid/r0s1. No existing data
will be lost if there are two free 512-byte blocks at the end of the media
for the GRAID label.

The second volume should span the rest of the space and can be arbitrarily
large, as GRAID uses 64-bit numbers. It will then appear as /dev/raid/r1.

You may then just "newfs /dev/raid/r1", put a BSD label on it beforehand, or
use this "device" for a new ZFS pool, etc.
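
A rough, untested sketch of those steps (the volume names and the -S size are
placeholders; double-check against graid(8) before trying this on real data):

# graid label -S 2199023255552 Promise vol0 SINGLE vtbd0
# graid add Promise vol1 SINGLE
# newfs /dev/raid/r1

The -S caps the first volume at 2 TiB so it lines up with the existing MBR,
and the second volume takes the remaining space.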

There is also GEOM_MAP, which is capable of similar things, but it is less
convenient.

But if your boot environment supports GPT, it is easier to just use GPT.



Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-20 Thread tech-lists

On Sun, May 19, 2019 at 10:17:35PM -0500, Adam wrote:

On Sun, May 19, 2019 at 9:47 PM tech-lists  wrote:


Thanks very much to you both, all sorted now. I didn't realise there was
a 2TB limit for MBR either. Can I shrink the 4TB to 2TB on the zfs side
without scrambling the ufs on the guest?



You can snapshot the zvol to be safe, but you should be able to shrink it
to the existing partition size. If it's a sparse zvol, it may not make that
much difference.


The zvol has about 515GB data. Hopefully zfs is smart enough to shrink
to the MBR boundary.
--
J.




Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-19 Thread Adam
On Sun, May 19, 2019 at 9:47 PM tech-lists  wrote:

> Thanks very much to you both, all sorted now. I didn't realise there was
> a 2TB limit for MBR either. Can I shrink the 4TB to 2TB on the zfs side
> without scrambling the ufs on the guest?
>

You can snapshot the zvol to be safe, but you should be able to shrink it
to the existing partition size. If it's a sparse zvol, it may not make that
much difference.
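
Something like this, for example (a sketch; zroot/vm/guest0 is a placeholder
dataset name, and shrinking volsize below the highest block in use would
corrupt the guest, so snapshot first):

# zfs snapshot zroot/vm/guest0@pre-shrink
# zfs set volsize=2T zroot/vm/guest0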
-- 
Adam


Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-19 Thread tech-lists

Thanks very much to you both, all sorted now. I didn't realise there was
a 2TB limit for MBR either. Can I shrink the 4TB to 2TB on the zfs side
without scrambling the ufs on the guest?

thanks,
--
J.




Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-19 Thread Freddie Cash
On Sun, May 19, 2019, 6:59 PM Paul Mather,  wrote:

> On May 19, 2019, at 9:46 PM, tech-lists  wrote:
>
> > Hi,
> >
> > context is 12-stable, zfs, bhyve
> >
> > I have a zvol-backed bhyve guest. Its zvol size was initially 512GB
> > It needed to be expanded to 4TB. That worked fine.
> >
> > The problem is the freebsd guest is UFS and I can't seem to make it see
> > the new size. But zfs list -o size on the host shows that as far as zfs is
> > concerned, it's 4TB
> >
> > On the guest, I've tried running growfs / but it says requested size is
> > the same as the size it already is (508GB)
> >
> > gpart show on the guest has the following
> >
> > # gpart show
> > =>          63  4294967232  vtbd0  MBR  (4.0T)
> >             63           1         - free -  (512B)
> >             64  4294967216      1  freebsd  [active]  (2.0T)
> >     4294967280          15         - free -  (7.5K)
> >
> > =>           0  4294967216  vtbd0s1  BSD  (2.0T)
> >              0  1065353216        1  freebsd-ufs   (508G)
> >     1065353216     8388544        2  freebsd-swap  (4.0G)
> >     1073741760  3221225456           - free -  (1.5T)
> >
> > I'm not understanding the double output, or why growfs hasn't worked on
> > the guest ufs. Can anyone help please?
>
>
> Given the above, the freebsd-ufs partition can't grow because there is a
> freebsd-swap partition between it and the free space you've added at the
> end of the volume.
>
> You'd need to delete the swap partition (or otherwise move it to the end of
> the partition on the volume) before you could successfully growfs the
> freebsd-ufs partition.
>

Even if you do all that, you won't be able to use more than 2 TB anyway, as
that's all MBR supports.

If you need more than 2 TB, you'll need to back up, repartition with GPT,
and restore from backups.
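
A rough outline of that path, booted from rescue/install media so nothing on
the disk is mounted (a sketch; device names, the label, and the backup
location are placeholders, and swap is omitted for brevity):

# dump -0af /backup/root.dump /dev/vtbd0s1a
# gpart destroy -F vtbd0
# gpart create -s gpt vtbd0
# gpart add -t freebsd-boot -s 512k vtbd0
# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 vtbd0
# gpart add -t freebsd-ufs -l rootfs vtbd0
# newfs -U /dev/vtbd0p2
# mount /dev/vtbd0p2 /mnt
# cd /mnt && restore -rf /backup/root.dump

Remember to update /etc/fstab afterwards, since the device names change.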


Cheers,
Freddie

Typos due to smartphone keyboard.



Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-19 Thread Paul Mather

On May 19, 2019, at 9:46 PM, tech-lists  wrote:


Hi,

context is 12-stable, zfs, bhyve

I have a zvol-backed bhyve guest. Its zvol size was initially 512GB
It needed to be expanded to 4TB. That worked fine.

The problem is the freebsd guest is UFS and I can't seem to make it see
the new size. But zfs list -o size on the host shows that as far as zfs is
concerned, it's 4TB

On the guest, I've tried running growfs / but it says requested size is
the same as the size it already is (508GB)

gpart show on the guest has the following

# gpart show
=>          63  4294967232  vtbd0  MBR  (4.0T)
            63           1         - free -  (512B)
            64  4294967216      1  freebsd  [active]  (2.0T)
    4294967280          15         - free -  (7.5K)

=>           0  4294967216  vtbd0s1  BSD  (2.0T)
             0  1065353216        1  freebsd-ufs   (508G)
    1065353216     8388544        2  freebsd-swap  (4.0G)
    1073741760  3221225456           - free -  (1.5T)

I'm not understanding the double output, or why growfs hasn't worked on
the guest ufs. Can anyone help please?



Given the above, the freebsd-ufs partition can't grow because there is a
freebsd-swap partition between it and the free space you've added at the
end of the volume.

You'd need to delete the swap partition (or otherwise move it to the end of
the partition on the volume) before you could successfully growfs the
freebsd-ufs partition.
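
Inside the guest, that would look roughly like this (a sketch based on the
layout above; the -s size is a placeholder that leaves room at the end of the
slice for recreated swap, so verify against your own gpart output first):

# swapoff /dev/vtbd0s1b
# gpart delete -i 2 vtbd0s1
# gpart resize -i 1 -s 2040G vtbd0s1
# gpart add -t freebsd-swap vtbd0s1
# swapon /dev/vtbd0s1b
# growfs -y /

Omit the explicit -s (and the swap re-add) if you'd rather give the file
system all of the free space.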


Cheers,

Paul.
