Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-20 Thread Paul Mather

On May 20, 2019, at 5:09 AM, tech-lists  wrote:


On Sun, May 19, 2019 at 10:17:35PM -0500, Adam wrote:

On Sun, May 19, 2019 at 9:47 PM tech-lists  wrote:


Thanks very much to you both, all sorted now. I didn't realise there was
a 2TB limit for MBR either. Can I shrink the 4TB to 2TB on the zfs side
without scrambling the ufs on the guest?


You can snapshot the zvol to be safe, but you should be able to shrink it
to the existing partition size.  If it's a sparse zvol, it may not make that
much difference.


The zvol has about 515GB data. Hopefully zfs is smart enough to shrink
to the MBR boundary.



A ZVOL is just a container.  ZFS has no implicit knowledge of what you are  
using it for or whether it has any particular partition table inside it.   
It's your responsibility to size the ZVOL appropriately.  (TL;DR: ZVOLs  
have no concept of an "MBR boundary.")


Cheers,

Paul.
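
For reference, a minimal sketch of the snapshot-then-shrink step on the host
side, assuming the zvol is named zroot/vm/guest0 (a placeholder; substitute
the real dataset) and that everything inside the guest ends below the 2TB
mark:

# zfs snapshot zroot/vm/guest0@pre-shrink
# zfs set volsize=2T zroot/vm/guest0

zfs(8) will truncate the volume to whatever volsize you set, so the snapshot
(and a look at "gpart show" inside the guest beforehand) is the only safety
net.  If the zvol was created sparse, shrinking it may not free much space,
as noted above.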



Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-20 Thread Eugene Grosbein
20.05.2019 9:14, Freddie Cash wrote:

> On Sun, May 19, 2019, 6:59 PM Paul Mather,  wrote:
> 
>> On May 19, 2019, at 9:46 PM, tech-lists  wrote:
>>
>>> Hi,
>>>
>>> context is 12-stable, zfs, bhyve
>>>
>>> I have a zvol-backed bhyve guest. Its zvol size was initially 512GB
>>> It needed to be expanded to 4TB. That worked fine.
>>>
>>> The problem is the freebsd guest is UFS and I can't seem to make it see
>>> the new size. But zfs list -o size on the host shows that as far as zfs
>> is
>>> concerned, it's 4TB
>>>
>>> On the guest, I've tried running growfs / but it says requested size is
>>> the same as the size it already is (508GB)
>>>
>>> gpart show on the guest has the following
>>>
>>> # gpart show
>>> =>           63  4294967232  vtbd0  MBR  (4.0T)
>>>              63           1         - free -  (512B)
>>>              64  4294967216      1  freebsd  [active]  (2.0T)
>>>      4294967280          15         - free -  (7.5K)
>>>
>>> =>            0  4294967216  vtbd0s1  BSD  (2.0T)
>>>               0  1065353216        1  freebsd-ufs  (508G)
>>>      1065353216     8388544        2  freebsd-swap  (4.0G)
>>>      1073741760  3221225456         - free -  (1.5T)
>>>
>>> I'm not understanding the double output, or why growfs hasn't worked on
>>> the guest ufs. Can anyone help please?
>>
>>
>> Given the above, the freebsd-ufs partition can't grow because there is a
>> freebsd-swap partition between it and the free space you've added at the
>> end of the volume.
>>
>> You'd need to delete the swap partition (or otherwise move it to the end
>> of the partition on the volume) before you could successfully growfs the
>> freebsd-ufs partition.
>>
> 
> Even if you do all that, you won't be able to use more than 2 TB anyway, as
> that's all MBR supports.
> 
> If you need more than 2 TB, you'll need to backup, repartition with GPT,
> and restore from backups.

Strictly speaking, FreeBSD is capable of using a "disk" over 2TB with MBR,
and there are multiple ways to achieve that.  The simplest one is to boot once
using another root file system (an mdconfig'ed image, iSCSI, or just another
local medium) and use "graid label -S" on the large medium to create a GRAID
"Promise" array with two SINGLE volumes.

The first volume should span the boot/root partition in the MBR; instead of
/dev/vtbd0s1 it will then show up as /dev/raid/r0s1.  No existing data will
be lost if there are two 512-byte blocks free at the end of the medium for
the GRAID label.

The second volume should span the rest of the space and can be arbitrarily
large, as GRAID uses 64-bit numbers.  It will then show up as /dev/raid/r1.

You may then just "newfs /dev/raid/r1", put a BSD label on it beforehand,
or use this "device" for a new ZFS pool, etc.
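
A rough sketch of that sequence, run from the temporary boot environment.
The sizes, the "vol0"/"vol1" labels, and <array-name> are placeholders, so
check graid(8) and the "graid status" output before trying this on real data:

# graid load
# graid label -S <size-of-MBR-plus-root-slice> Promise vol0 SINGLE vtbd0
# graid add -S <size-of-remaining-space> <array-name> vol1 SINGLE
# newfs -U /dev/raid/r1

Here <array-name> is the array node created by the label step; afterwards the
root slice should appear as /dev/raid/r0s1 instead of /dev/vtbd0s1, and
/dev/raid/r1 covers the rest of the disk.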

There is also GEOM_MAP, which is capable of similar things, but it is less convenient.

But, if your boot environment supports GPT, it is easier to use GPT.
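
For completeness, the delete-swap-then-growfs route quoted above (staying
within MBR's 2TB limit) would look roughly like this inside the guest, using
the partition indices from the gpart output earlier in the thread (back up,
or at least snapshot the zvol on the host, first):

# swapoff /dev/vtbd0s1b
# gpart delete -i 2 vtbd0s1
# gpart resize -i 1 -s <new-ufs-size> vtbd0s1
# gpart add -t freebsd-swap -i 2 vtbd0s1
# growfs /dev/vtbd0s1a
# swapon /dev/vtbd0s1b

Pick <new-ufs-size> so that a few gigabytes remain at the end of the slice
for the re-created swap partition; growfs(8) has been able to grow a mounted
file system since FreeBSD 10.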



FreeBSD CI Weekly Report 2019-05-12

2019-05-20 Thread Li-Wen Hsu
(bcc -current and -stable for more audience)

FreeBSD CI Weekly Report 2019-05-12
===================================

Here is a summary of the FreeBSD Continuous Integration results for
the period from 2019-05-06 to 2019-05-12.

During this period:

* 2151 builds (98.2% passed, 1.8% failed) were executed on aarch64,
amd64, armv6, armv7, i386, mips, mips64, powerpc, powerpc64,
powerpcspe, riscv64, sparc64 architectures for head, stable/12,
stable/11 branches.
* 394 test runs (43.9% passed, 13.7% unstable, 42.4% exception) were
executed on amd64, i386, riscv64 architectures for head, stable/12,
stable/11 branches.
* 19 doc builds (100% passed)

(The statistics from experimental jobs are omitted)

If any of the issues found by CI are in your area of interest or
expertise please investigate the PRs listed below.

The latest web version of this report is available at
https://hackmd.io/s/SJ9KUn16V and the archive is available at
http://hackfoldr.org/freebsd-ci-report/; any help is welcome.

## Failing Tests

* https://ci.freebsd.org/job/FreeBSD-head-i386-test/
The i386 test run is currently failing because loading the ipsec(4)
kernel module, which is needed after
https://svnweb.freebsd.org/changeset/base/347410, causes a kernel
panic.

* https://ci.freebsd.org/job/FreeBSD-stable-12-i386-test/
* sys.netpfil.pf.forward.v6
* sys.netpfil.pf.forward.v4
* sys.netpfil.pf.set_tos.v4
* lib.libc.regex.exhaust_test.regcomp_too_big
* lib.libregex.exhaust_test.regcomp_too_big

* https://ci.freebsd.org/job/FreeBSD-stable-11-i386-test/
* local.kyua.* (31 cases)
* local.lutok.* (3 cases)

## Failing Tests (from experimental jobs)

* https://ci.freebsd.org/job/FreeBSD-head-amd64-test_zfs/
There are ~60 failing cases, including flaky ones; see
https://ci.freebsd.org/job/FreeBSD-head-amd64-test_zfs/lastCompletedBuild/testReport/
for more details.

## Disabled Tests

* lib.libc.sys.mmap_test.mmap_truncate_signal
  https://bugs.freebsd.org/211924
* sys.fs.tmpfs.mount_test.large
  https://bugs.freebsd.org/212862
* sys.fs.tmpfs.link_test.kqueue
  https://bugs.freebsd.org/213662
* sys.kqueue.libkqueue.kqueue_test.main
  https://bugs.freebsd.org/233586
* usr.bin.procstat.procstat_test.command_line_arguments
  https://bugs.freebsd.org/233587
* usr.bin.procstat.procstat_test.environment
  https://bugs.freebsd.org/233588

## Open Issues

* https://bugs.freebsd.org/237077 possible race in build:
/usr/src/sys/amd64/linux/linux_support.s:38:2: error: expected
relocatable expression
* https://bugs.freebsd.org/237403 Tests in sys/opencrypto should be
converted to Python3
* https://bugs.freebsd.org/237641 Flakey test case:
common.misc.t_dtrace_contrib.tst_dynopt_d
* https://bugs.freebsd.org/237652
tests.hotspare.hotspare_test.hotspare_snapshot_001_pos timeout since
somewhere in (r346814, r346845]
* https://bugs.freebsd.org/237655 Non-deterministic panic when running
pf tests in interface ioctl code (NULL passed to strncmp)
* https://bugs.freebsd.org/237656 "Freed UMA keg (rtentry) was not
empty (18 items). Lost 1 pages of memory." seen when running
sys/netipsec tests
* https://bugs.freebsd.org/237657
sys.kern.pdeathsig.signal_delivered_ptrace timing out periodically on
i386

### Causes of build failures

* [233735: Possible build race: genoffset.o /usr/src/sys/sys/types.h:
error: machine/endian.h: No such file or
directory](https://bugs.freebsd.org/233735)
* [233769: Possible build race: ld: error: unable to find library
-lgcc_s](https://bugs.freebsd.org/233769)

### Others
[Tickets related to testing@](https://preview.tinyurl.com/y9maauwg)


Re: trying to expand a zvol-backed bhyve guest which is UFS

2019-05-20 Thread tech-lists

On Sun, May 19, 2019 at 10:17:35PM -0500, Adam wrote:

On Sun, May 19, 2019 at 9:47 PM tech-lists  wrote:


Thanks very much to you both, all sorted now. I didn't realise there was
a 2TB limit for MBR either. Can I shrink the 4TB to 2TB on the zfs side
without scrambling the ufs on the guest?



You can snapshot the zvol to be safe, but you should be able to shrink it
to the existing partition size.  If it's a sparse zvol, it may not make that
much difference.


The zvol has about 515GB data. Hopefully zfs is smart enough to shrink
to the MBR boundary.
--
J.

