On 11/27/2015 3:16 AM, Willem Jan Withagen wrote:
On 27-11-2015 06:59, Matthew Grooms wrote:
On 10/16/2014 3:10 AM, Edward Tomasz Napierała wrote:
On 1010T1529, Matthew Grooms wrote:
All,

I am a long-time user and advocate of FreeBSD and manage several
deployments of FreeBSD in a few data centers. Now that these
environments are almost always virtual, it would make sense for FreeBSD
to support basic features such as dynamic disk resizing. It looks like
most of the parts are intended to work. Kudos to the FreeBSD Foundation
for seeing the need and sponsoring dynamic increase of online UFS
filesystems via growfs. Unfortunately, it would appear that there are
still problems in this area, such as ...

a) cam/geom recognizing when a drive's size has increased
b) zpool recognizing when a gpt partition size has increased

For example, if I do an install of FreeBSD 10 on VMware using ZFS, I see
the following ...

root@zpool-test:~ #  gpart show
=>      34  16777149  da0  GPT  (8.0G)
          34      1024    1  freebsd-boot  (512K)
        1058   4194304    2  freebsd-swap  (2.0G)
     4195362  12581821    3  freebsd-zfs  (6.0G)

If I increase the VM disk size using VMware to 16G and rescan using
camcontrol, this is what I see ...
"camcontrol rescan" does not force fetching the updated disk size.
AFAIK there is no way to do that.  However, this should happen
automatically, if the "other side" properly sends proper Unit Attention
after resizing.  No idea why this doesn't happen with VMWare.
Reboot obviously clears things up.
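
One thing worth trying, by the way: manually re-reading the capacity
may prod CAM into noticing the new size even without the Unit
Attention.  A minimal sketch, assuming da0 is the resized disk:

root@zpool-test:~ # camcontrol readcap da0 -h
root@zpool-test:~ # gpart show da0

If readcap reports the new size but gpart still shows the old one, the
problem is above CAM, in GEOM.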

[..]

Now I want to claim the additional space for my zpool ...

root@zpool-test:~ # zpool status
    pool: zroot
   state: ONLINE
    scan: none requested
config:

          NAME                                          STATE     READ WRITE CKSUM
          zroot                                         ONLINE       0     0     0
            gptid/352086bd-50b5-11e4-95b8-0050569b2a04  ONLINE       0     0     0

root@zpool-test:~ # zpool set autoexpand=on zroot
root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
root@zpool-test:~ # zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  5.97G   876M  5.11G    14%  1.00x  ONLINE  -

The zpool still appears to have only 5.11G free. Let's reboot and try
again ...
Interesting. This used to work; actually either of those (autoexpand or
online -e) should do the trick.

root@zpool-test:~ # zpool set autoexpand=on zroot
root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
root@zpool-test:~ # zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  14.0G   876M  13.1G     6%  1.00x  ONLINE  -

Now I have 13.1G free. I can add this space to any of my zfs volumes and
it picks the change up immediately. So the question remains, why do I
need to reboot the OS twice to allocate new disk space to a volume?
FreeBSD is first and foremost a server operating system. Servers are
commonly deployed in data centers. Virtual environments are now
commonplace in data centers, not the exception to the rule. VMware still
has the vast majority of the private virtual environment market. I
assume that most would expect things like this to work out of the box.
Did I miss a required step or is this fixed in CURRENT?
Looks like genuine bugs (or rather, one missing feature and one bug).
Filing PRs for those might be a good idea.


All,

I know this is a very late follow-up, but I spent some more time looking
at this today and found some additional information that I found quite
interesting. I set up two VMs, one acting as an iSCSI initiator
( CURRENT ) and the other as a target ( 10.2-RELEASE ). Both are
running under ESXi v5.5. There are two block devices on the initiator,
da1 and da2, that I used for resize testing ...

[root@iscsi-i /home/mgrooms]# camcontrol devlist
<NECVMWar VMware IDE CDR10 1.00>   at scbus1 target 0 lun 0 (cd0,pass0)
<VMware Virtual disk 1.0>          at scbus2 target 0 lun 0 (pass1,da0)
<VMware Virtual disk 1.0>          at scbus2 target 1 lun 0 (pass2,da1)
<FREEBSD CTLDISK 0001>             at scbus3 target 0 lun 0 (da2,pass3)

The da1 device is a virtual disk hanging off a VMware virtual SAS
controller ...

[root@iscsi-i /home/mgrooms]# pciconf
...
mpt0@pci0:3:0:0:        class=0x010700 card=0x197615ad chip=0x00541000 rev=0x01 hdr=0x00
     vendor     = 'LSI Logic / Symbios Logic'
     device     = 'SAS1068 PCI-X Fusion-MPT SAS'
     class      = mass storage
     subclass   = SAS

[root@iscsi-i /home/mgrooms]# camcontrol readcap da1 -h
Device Size: 10 G, Block Length: 512 bytes

[root@iscsi-i /home/mgrooms]# gpart show da1
=>      40  20971440  da1  GPT  (10G)
         40  20971440    1  freebsd-ufs  (10G)

The da2 device is an iSCSI LUN mounted from my FreeBSD 10.2 VM running
ctld ...

[root@iscsi-i /home/mgrooms]# iscsictl
Target name                          Target portal    State
iqn.2015-01.lab.shrew:target0        iscsi-t.shrew.lab Connected: da2

[root@iscsi-i /home/mgrooms]# camcontrol readcap da2 -h
Device Size: 10 G, Block Length: 512 bytes

[root@iscsi-i /home/mgrooms]# gpart show da2
=>      40  20971440  da2  GPT  (10G)
         40        24       - free -  (12K)
         64  20971392    1  freebsd-ufs  (10G)
   20971456        24       - free -  (12K)
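
For completeness, the initiator side is the stock in-tree iSCSI stack.
The session above was brought up with roughly the following; the exact
config block is reconstructed from the iscsictl output, and the
nickname "t0" is arbitrary ...

/etc/iscsi.conf:

t0 {
        TargetAddress = iscsi-t.shrew.lab
        TargetName    = iqn.2015-01.lab.shrew:target0
}

[root@iscsi-i /home/mgrooms]# service iscsid start
[root@iscsi-i /home/mgrooms]# iscsictl -An t0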

When I increased the size of da1 ( the VMDK ) and then re-ran
'camcontrol readcap' without a reboot, it clearly showed that the disk
size had increased. However, geom failed to recognize the additional
capacity ...

[root@iscsi-i /home/mgrooms]# camcontrol readcap da1 -h
Device Size: 16 G, Block Length: 512 bytes

[root@iscsi-i /home/mgrooms]# gpart show da1
=>      40  20971440  da1  GPT  (10G)
         40  20971440    1  freebsd-ufs  (10G)

Here is the interesting bit. I increased the size of da2 by modifying
the LUN size in ctl.conf on the target and then issued an '/etc/rc.d/ctld
reload'. When I re-ran 'camcontrol readcap' on the initiator without a
reboot, it also showed that the disk size had increased, but this time
geom recognized the additional capacity as well ...

[root@iscsi-i /home/mgrooms]# camcontrol readcap da2 -h
Device Size: 16 G, Block Length: 512 bytes

[root@iscsi-i /home/mgrooms]# gpart show da2
=>      40  33554352  da2  GPT  (16G)
         40        24       - free -  (12K)
         64  20971392    1  freebsd-ufs  (10G)
   20971456  12582936       - free -  (6.0G)
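
The change on the target side was just bumping the size in the LUN
entry and reloading ctld, roughly like this; the backing path here is
hypothetical ...

/etc/ctl.conf:

target iqn.2015-01.lab.shrew:target0 {
        lun 0 {
                path /data/target0.img    # hypothetical backing file
                size 16G                  # was 10G
        }
}

[root@iscsi-t ~]# /etc/rc.d/ctld reload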

I was then able to resize the partition and grow the UFS filesystem,
all without rebooting the VM ...

[root@iscsi-i /home/mgrooms]# gpart resize -i 1 da2
da2p1 resized

[root@iscsi-i /home/mgrooms]# gpart show da2
=>      40  33554352  da2  GPT  (16G)
         40        24       - free -  (12K)
         64  33554304    1  freebsd-ufs  (16G)
   33554368        24       - free -  (12K)

[root@iscsi-i /home/mgrooms]# growfs da2p1
Device is mounted read-write; resizing will result in temporary write
suspension for /var/data2.
It's strongly recommended to make a backup before growing the file system.
OK to grow filesystem on /dev/da2p1, mounted on /var/data2, from 10GB to
16GB? [Yes/No] Yes
super-block backups (for fsck_ffs -b #) at:
  21798272, 23080512, 24362752, 25644992, 26927232, 28209472, 29491712,
30773952, 32056192, 33338432

[root@iscsi-i /home/mgrooms]# df -h
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/da0p3     15G    1.2G     12G     9%    /
devfs         1.0K    1.0K      0B   100%    /dev
/dev/da1p1    9.7G     32M    8.9G     0%    /var/data1
/dev/da2p1     15G     32M     14G     0%    /var/data2

It's also worth noting that the additional space was not recognized by
gpart/geom on the initiator until after the 'camcontrol readcap da2'
command was run. In other words, I'm skeptical that it was a Unit
Attention notification that made the right thing happen, since it still
took manual prodding of CAM to get the new disk geometry up into the
GEOM layer.
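
To summarize, the full online grow over iSCSI came down to this
sequence, consolidated from the transcripts above ...

[root@iscsi-i /home/mgrooms]# camcontrol readcap da2 -h  # prod CAM
[root@iscsi-i /home/mgrooms]# gpart show da2             # free space now visible
[root@iscsi-i /home/mgrooms]# gpart resize -i 1 da2      # grow the partition
[root@iscsi-i /home/mgrooms]# growfs da2p1               # grow the mounted UFS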

I remember doing this for a bhyve VM, and had the same type of problem.
Getting gpart in the VM to actually pick up the new size required some
extra prodding (I like that word) or rebooting the VM.
I remember reporting this:
    topic: "resampling of a ZVOL that has been resized"
and getting a fix from Andrey V. Elsukov ...

Index: head/sys/geom/part/g_part_gpt.c
===================================================================
--- head/sys/geom/part/g_part_gpt.c    (revision 282044)
+++ head/sys/geom/part/g_part_gpt.c    (working copy)
@@ -760,7 +760,7 @@ g_part_gpt_resize(struct g_part_table *basetable,
     struct g_part_gpt_entry *entry;

     if (baseentry == NULL)
-        return (EOPNOTSUPP);
+        return (g_part_gpt_recover(basetable));

     entry = (struct g_part_gpt_entry *)baseentry;
     baseentry->gpe_end = baseentry->gpe_start + gpp->gpp_size - 1;

That went into the tree, but perhaps only in HEAD,
and it helped me get the correct retasting of the GPT partitions.

Not sure if this snippet would help you get GEOM to taste the
new size.
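
One more thing that might be worth a try on a kernel without that
change: gpart(8) has a "recover" verb that rewrites the secondary GPT
header at the new end of the disk, which is exactly what moves when a
disk grows. Untested here, just a guess:

    gpart recover da1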


Thanks for the response. I read the commit logs daily, but I don't recall that one. However, it looks like that change went in over five months ago ( r284151 ), and I'm running these tests on CURRENT ...

[mgrooms@iscsi-i ~]$ uname -a
FreeBSD iscsi-i.shrew.lab 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r291085: Thu Nov 19 21:48:13 UTC 2015 r...@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64

Something else is amiss here and I don't have the chops to fix it myself. I tried compiling a custom kernel with CAMDEBUG and twiddling with the camcontrol debug options to compare the output during the VMDK vs iSCSI disk resize process. Nothing was obvious to my untrained eye. I'd be more than happy to test patches or provide additional information to anyone willing to help.
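
For reference, the debugging setup was roughly this, with the
bus:target:lun numbers taken from the camcontrol devlist output above:

# kernel config: GENERIC plus
options         CAMDEBUG

[root@iscsi-i /home/mgrooms]# camcontrol debug -c 2:1:0  # trace CDBs for da1
[root@iscsi-i /home/mgrooms]# camcontrol debug -c 3:0:0  # trace CDBs for da2
... resize both disks, compare the console output, then ...
[root@iscsi-i /home/mgrooms]# camcontrol debug off       # stop tracing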

Thanks,

-Matthew