Re: bhyve uses all available memory during IO-intensive operations

2017-12-02 Thread Allan Jude
On 2017-12-02 20:21, K. Macy wrote:
> On Sat, Dec 2, 2017 at 5:16 PM, Allan Jude  wrote:
>> On 12/02/2017 00:23, Dustin Wenz wrote:
>>> I have noticed significant storage amplification for my zvols; that could 
>>> very well be the reason. I would like to know more about why it happens.
>>>
>>> Since the volblocksize is 512 bytes, I certainly expect extra cpu overhead 
>>> (and maybe an extra 1k or so worth of checksums for each 128k block in the 
>>> vm), but how do you get a 10X expansion in stored data?
>>>
>>> What is the recommended zvol block size for a FreeBSD/ZFS guest? Perhaps 
>>> 4k, to match the most common mass storage sector size?
>>>
>>> - .Dustin
>>>
 On Dec 1, 2017, at 9:18 PM, K. Macy  wrote:

 One thing to watch out for with chyves if your virtual disk is more
 than 20G is the fact that it uses 512 byte blocks for the zvols it
 creates. I ended up using 1.4TB while only half filling a 250G zvol.
 Chyves is quick and easy, but it's not exactly production ready.

 -M



> On Thu, Nov 30, 2017 at 3:15 PM, Dustin Wenz  
> wrote:
> I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is 
> also FreeBSD 11.1). Their sole purpose is to house some medium-sized 
> Postgres databases (100-200GB). The host system has 64GB of real memory 
> and 112GB of swap. I have configured each guest to only use 16GB of 
> memory, yet while doing my initial database imports in the VMs, bhyve 
> will quickly grow to use all available system memory and then be killed 
> by the kernel:
>
>kernel: swap_pager: I/O error - pageout failed; blkno 1735,size 
> 4096, error 12
>kernel: swap_pager: I/O error - pageout failed; blkno 1610,size 
> 4096, error 12
>kernel: swap_pager: I/O error - pageout failed; blkno 1763,size 
> 4096, error 12
>kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space
>
> The OOM condition seems related to doing moderate IO within the VM, 
> though nothing within the VM itself shows high memory usage. This is the 
> chyves config for one of them:
>
>bargs  -A -H -P -S
>bhyve_disk_typevirtio-blk
>bhyve_net_type virtio-net
>bhyveload_flags
>chyves_guest_version   0300
>cpu4
>creation   Created on Mon Oct 23 16:17:04 CDT 2017 
> by chyves v0.2.0 2016/09/11 using __create()
>loader bhyveload
>net_ifaces tap51
>os default
>ram16G
>rcboot 0
>revert_to_snapshot
>revert_to_snapshot_method  off
>serial nmdm51
>template   no
>uuid   8495a130-b837-11e7-b092-0025909a8b56
>
>
> I've also tried using different bhyve_disk_types, with no improvement. 
> How is it that bhyve can use far more memory than I'm specifying?
>
>- .Dustin
>>
>> Storage amplification usually has to do with ZFS RAID-Z padding. If your
>> ZVOL block size does not make sense with your disk sector size, and
>> RAID-Z level, you can get pretty silly numbers.
> 
> That's not what I'm talking about here. If your volblocksize is too
> small you end up using (vastly) more space for indirect blocks than
> data blocks.
> 
> -M
> 

In addition, if you have, say, 4k sectors and RAID-Z2, every allocation
of 4k or less requires 12k of disk space (1 data sector plus 2 parity sectors).

Allocations of 8k are worse in this case, since all allocations must be
made in units of 1+p sectors, where p is the parity level. So allocating 8kb
of data (2x 4k sectors) plus 2x 4k parity sectors gives 4 sectors, which
rounds up to the next multiple of 3: 6 sectors.

That means 8k of data took: 8kb for data + 8kb for parity + 8kb for
padding = 24kb of space.

If you were using RAID-Z1, it would have been 16kb: 8kb data + 4kb parity
is 3 sectors, which rounds up to the next multiple of 2, so 4kb of padding.

Or, if you used a 16kb volblocksize on the zvol: 4 data sectors + 2 parity
sectors = 6 sectors, already a multiple of 3, so no padding is required.
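
To make that arithmetic easy to replay, here is a minimal /bin/sh sketch of
the model described above (an assumption on my part: it only handles small
blocks whose data fits in a single stripe row, so parity is exactly p
sectors, and it rounds the total up to a multiple of 1+p as described; it is
not the actual ZFS allocator code):

    # raidz_sectors <blocksize> <sectorsize> <parity>
    raidz_sectors() {
        data=$(( ($1 + $2 - 1) / $2 ))      # data sectors
        total=$(( data + $3 ))              # plus p parity sectors (single-row case)
        mult=$(( $3 + 1 ))                  # allocations come in units of 1+p
        echo $(( (total + mult - 1) / mult * mult ))
    }

    raidz_sectors 8192 4096 2    # -> 6 sectors = 24kb, matching the 8kb example
    raidz_sectors 16384 4096 2   # -> 6 sectors, no padding needed

By the same arithmetic, a 512-byte logical block on 4k-sector RAID-Z2 still
costs 3 sectors (12k), which is a large part of the amplification being
reported here.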

-- 
Allan Jude


Re: bhyve uses all available memory during IO-intensive operations

2017-12-02 Thread Jason Tubnor
On 3 Dec. 2017 12:21, "K. Macy"  wrote:

>
> Storage amplification usually has to do with ZFS RAID-Z padding. If your
> ZVOL block size does not make sense with your disk sector size, and
> RAID-Z level, you can get pretty silly numbers.

That's not what I'm talking about here. If your volblocksize is too
small you end up using (vastly) more space for indirect blocks than
data blocks.

-M


This. I experienced this with chyves and bumped the block size back to 8k.
Fixed the issue.
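
For anyone doing the same, a hedged sketch of the mechanics (the dataset
names are made up; volblocksize is fixed at creation time, so the zvol has to
be recreated and the data copied across):

    # how badly is the current zvol amplified?
    zfs get volblocksize,volsize,used,logicalused tank/chyves/guests/vm0/disk0

    # volblocksize can only be set at creation, so build a new zvol and copy
    zfs create -s -V 250G -o volblocksize=8k tank/chyves/guests/vm0/disk0-8k
    dd if=/dev/zvol/tank/chyves/guests/vm0/disk0 \
       of=/dev/zvol/tank/chyves/guests/vm0/disk0-8k bs=1m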

Be careful selecting 8k for Windows guests, though: anything over 4k and you
can't use MSSQL.

With some tweaks, I've found vm-bhyve better suited if you can go fully
UEFI. I'll be putting up some templates shortly.
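
Not the finished templates, but a rough sketch of the shape of a vm-bhyve
guest config (the option names are from memory and should be checked against
the vm-bhyve docs; in particular I'm assuming zfs_zvol_opts is the knob that
sets the backing zvol's volblocksize):

    # $vm_dir/.templates/freebsd-uefi.conf (path depends on how vm-bhyve was set up)
    loader="uefi"
    cpu=4
    memory=16G
    network0_type="virtio-net"
    network0_switch="public"
    disk0_type="virtio-blk"
    disk0_name="disk0"
    # assumption: passes create-time options through to the backing zvol
    zfs_zvol_opts="volblocksize=8k"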

Cheers,

Jason


Re: bhyve uses all available memory during IO-intensive operations

2017-12-02 Thread K. Macy
On Sat, Dec 2, 2017 at 5:16 PM, Allan Jude  wrote:
> On 12/02/2017 00:23, Dustin Wenz wrote:
>> I have noticed significant storage amplification for my zvols; that could 
>> very well be the reason. I would like to know more about why it happens.
>>
>> Since the volblocksize is 512 bytes, I certainly expect extra cpu overhead 
>> (and maybe an extra 1k or so worth of checksums for each 128k block in the 
>> vm), but how do you get a 10X expansion in stored data?
>>
>> What is the recommended zvol block size for a FreeBSD/ZFS guest? Perhaps 4k, 
>> to match the most common mass storage sector size?
>>
>> - .Dustin
>>
>>> On Dec 1, 2017, at 9:18 PM, K. Macy  wrote:
>>>
>>> One thing to watch out for with chyves if your virtual disk is more
>>> than 20G is the fact that it uses 512 byte blocks for the zvols it
>>> creates. I ended up using 1.4TB while only half filling a 250G zvol.
>>> Chyves is quick and easy, but it's not exactly production ready.
>>>
>>> -M
>>>
>>>
>>>
 On Thu, Nov 30, 2017 at 3:15 PM, Dustin Wenz  
 wrote:
 I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is 
 also FreeBSD 11.1). Their sole purpose is to house some medium-sized 
 Postgres databases (100-200GB). The host system has 64GB of real memory 
 and 112GB of swap. I have configured each guest to only use 16GB of 
 memory, yet while doing my initial database imports in the VMs, bhyve will 
 quickly grow to use all available system memory and then be killed by the 
 kernel:

kernel: swap_pager: I/O error - pageout failed; blkno 1735,size 
 4096, error 12
kernel: swap_pager: I/O error - pageout failed; blkno 1610,size 
 4096, error 12
kernel: swap_pager: I/O error - pageout failed; blkno 1763,size 
 4096, error 12
kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space

 The OOM condition seems related to doing moderate IO within the VM, though 
 nothing within the VM itself shows high memory usage. This is the chyves 
 config for one of them:

bargs  -A -H -P -S
bhyve_disk_typevirtio-blk
bhyve_net_type virtio-net
bhyveload_flags
chyves_guest_version   0300
cpu4
creation   Created on Mon Oct 23 16:17:04 CDT 2017 
 by chyves v0.2.0 2016/09/11 using __create()
loader bhyveload
net_ifaces tap51
os default
ram16G
rcboot 0
revert_to_snapshot
revert_to_snapshot_method  off
serial nmdm51
template   no
uuid   8495a130-b837-11e7-b092-0025909a8b56


 I've also tried using different bhyve_disk_types, with no improvement. How 
 is it that bhyve can use far more memory than I'm specifying?

- .Dustin
>
> Storage amplification usually has to do with ZFS RAID-Z padding. If your
> ZVOL block size does not make sense with your disk sector size, and
> RAID-Z level, you can get pretty silly numbers.

That's not what I'm talking about here. If your volblocksize is too
small you end up using (vastly) more space for indirect blocks than
data blocks.
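
A rough back-of-the-envelope sketch for the 512-byte case above (assumptions:
about 128 bytes of block pointer per data block, and ignoring that the
indirect blocks themselves compress on disk):

    # a 250G zvol at 512-byte volblocksize holds this many data blocks:
    echo $(( 250 * 1024 * 1024 * 1024 / 512 ))                        # ~524 million
    # at roughly 128 bytes of block pointer each, the indirect tree alone is:
    echo $(( 250 * 1024 * 1024 * 1024 / 512 * 128 / 1024 / 1024 ))MB  # ~64000 MB

That is tens of gigabytes of metadata before RAID-Z parity and padding are
even counted.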

-M


Re: bhyve uses all available memory during IO-intensive operations

2017-12-02 Thread Allan Jude
On 12/02/2017 00:23, Dustin Wenz wrote:
> I have noticed significant storage amplification for my zvols; that could 
> very well be the reason. I would like to know more about why it happens. 
> 
> Since the volblocksize is 512 bytes, I certainly expect extra cpu overhead 
> (and maybe an extra 1k or so worth of checksums for each 128k block in the 
> vm), but how do you get a 10X expansion in stored data?
> 
> What is the recommended zvol block size for a FreeBSD/ZFS guest? Perhaps 4k, 
> to match the most common mass storage sector size?
> 
> - .Dustin
> 
>> On Dec 1, 2017, at 9:18 PM, K. Macy  wrote:
>>
>> One thing to watch out for with chyves if your virtual disk is more
>> than 20G is the fact that it uses 512 byte blocks for the zvols it
>> creates. I ended up using 1.4TB while only half filling a 250G zvol.
>> Chyves is quick and easy, but it's not exactly production ready.
>>
>> -M
>>
>>
>>
>>> On Thu, Nov 30, 2017 at 3:15 PM, Dustin Wenz  wrote:
>>> I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is 
>>> also FreeBSD 11.1). Their sole purpose is to house some medium-sized 
>>> Postgres databases (100-200GB). The host system has 64GB of real memory and 
>>> 112GB of swap. I have configured each guest to only use 16GB of memory, yet 
>>> while doing my initial database imports in the VMs, bhyve will quickly grow 
>>> to use all available system memory and then be killed by the kernel:
>>>
>>>kernel: swap_pager: I/O error - pageout failed; blkno 1735,size 
>>> 4096, error 12
>>>kernel: swap_pager: I/O error - pageout failed; blkno 1610,size 
>>> 4096, error 12
>>>kernel: swap_pager: I/O error - pageout failed; blkno 1763,size 
>>> 4096, error 12
>>>kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space
>>>
>>> The OOM condition seems related to doing moderate IO within the VM, though 
>>> nothing within the VM itself shows high memory usage. This is the chyves 
>>> config for one of them:
>>>
>>>bargs  -A -H -P -S
>>>bhyve_disk_typevirtio-blk
>>>bhyve_net_type virtio-net
>>>bhyveload_flags
>>>chyves_guest_version   0300
>>>cpu4
>>>creation   Created on Mon Oct 23 16:17:04 CDT 2017 
>>> by chyves v0.2.0 2016/09/11 using __create()
>>>loader bhyveload
>>>net_ifaces tap51
>>>os default
>>>ram16G
>>>rcboot 0
>>>revert_to_snapshot
>>>revert_to_snapshot_method  off
>>>serial nmdm51
>>>template   no
>>>uuid   8495a130-b837-11e7-b092-0025909a8b56
>>>
>>>
>>> I've also tried using different bhyve_disk_types, with no improvement. How 
>>> is it that bhyve can use far more memory than I'm specifying?
>>>
>>>- .Dustin

Storage amplification usually has to do with ZFS RAID-Z padding. If your
ZVOL block size does not make sense with your disk sector size, and
RAID-Z level, you can get pretty silly numbers.

-- 
Allan Jude


Re: bhyve uses all available memory during IO-intensive operations

2017-12-02 Thread K. Macy
There was a standards group, but now the interfaces used by the Linux
virtio drivers define the de facto standard. As virtual interfaces go
they're fairly decent. So all we need is a backend.

The one thing FreeBSD doesn't have that I miss is CPU hot plug when running
as a guest - or at least a mechanism to be told to stop running on the APs.
That would make live migration much simpler.


-M



On Sat, Dec 2, 2017 at 12:03 Rodney W. Grimes <
freebsd-...@pdx.rh.cn85.dnsmgr.net> wrote:

> > On Fri, Dec 1, 2017 at 20:02 Rodney W. Grimes <
> > freebsd-...@pdx.rh.cn85.dnsmgr.net> wrote:
> >
> > > > On 02/12/2017 08:11, Dustin Wenz wrote:
> > > > >
> > > > > The commit history shows that chyves defaults to -S if you are
> > > > > hosting from FreeBSD 10.3 or later. I'm sure they had a reason for
> > > > doing that, but I don't know what that would be. It seems to be an
> > > > > inefficient use of main memory if you need to run a lot of VMs.
> > > >
> > > > It sounds like a reasonable solution to a problem. If host memory is
> > > > full it swaps some out, so a bhyve might have free mem but some could be
> > > > swapped out by the host. If the bhyve is out of mem, its system swaps
> > > > to its disk, so the host swaps it back in so that the bhyve can then
> > > > swap it to its disk...
> > > >
> > > > Wiring bhyve ram might be a reasonable solution as long as the host's
> > > > physical ram isn't overallocated by bhyve guests.
> > > >
> > > > The best solution would involve a host and guest talking to each other
> > > > about used mem, but that would break the whole virtual machine illusion.
> > > > At the least it would involve a system telling the hardware what memory
> > > > is used and what is not, which just isn't something any system does.
> > > > Maybe that is an idea for the vm guest aware systems of the future.
> > >
> > > It's actually old technology; it's called the memory balloon driver,
> > > but bhyve does not have that functionality, yet.
> > >
> > >
> >
> > The virtio balloon driver is already there. Implementing a kernel backend
> > for it would be trivial. In-kernel virtio-net and virtio-p9fs backends are
> > already well underway.
>
> Don't you also need guest front ends for each OS?  That is the hard part
> from what I have seen of all the memory overcommit stuff.  Especially when
> you get to the part of "this page of memory in this guest is the exact
> same thing as that page of memory in that guest".
>
> But if we had the backend working, and just the FreeBSD guest frontend
> it would be a big win for many of us using bhyve with quantities of
> FreeBSD guests.
>
> Are we compatible enough that we can use the KVM guest ballooning extensions?
>
> --
> Rod Grimes
> rgri...@freebsd.org
>


Re: bhyve uses all available memory during IO-intensive operations

2017-12-02 Thread Rodney W. Grimes
> On Fri, Dec 1, 2017 at 20:02 Rodney W. Grimes <
> freebsd-...@pdx.rh.cn85.dnsmgr.net> wrote:
> 
> > > On 02/12/2017 08:11, Dustin Wenz wrote:
> > > >
> > > > The commit history shows that chyves defaults to -S if you are
> > > > hosting from FreeBSD 10.3 or later. I'm sure they had a reason for
> > > > doing that, but I don't know what that would be. It seems to be an
> > > > inefficient use of main memory if you need to run a lot of VMs.
> > >
> > > It sounds like a reasonable solution to a problem. If host memory is
> > > full it swaps some out, so a bhyve might have free mem but some could be
> > > swapped out by the host. If the bhyve is out of mem, its system swaps
> > > to its disk, so the host swaps it back in so that the bhyve can then
> > > swap it to its disk...
> > >
> > > Wiring bhyve ram might be a reasonable solution as long as the host's
> > > physical ram isn't overallocated by bhyve guests.
> > >
> > > The best solution would involve a host and guest talking to each other
> > > about used mem, but that would break the whole virtual machine illusion.
> > > At the least it would involve a system telling the hardware what memory
> > > is used and what is not, which just isn't something any system does.
> > > Maybe that is an idea for the vm guest aware systems of the future.
> >
> > It's actually old technology; it's called the memory balloon driver,
> > but bhyve does not have that functionality, yet.
> >
> >
> 
> The virtio balloon driver is already there. Implementing a kernel backend
> for it would be trivial. In-kernel virtio-net and virtio-p9fs backends are
> already well underway.

Don't you also need guest front ends for each OS?  That is the hard part
from what I have seen of all the memory overcommit stuff.  Especially when
you get to the part of "this page of memory in this guest is the exact
same thing as that page of memory in that guest".

But if we had the backend working, and just the FreeBSD guest frontend
it would be a big win for many of us using bhyve with quantities of
FreeBSD guests.

Are we compatible enough that we can use the KVM guest ballooning extensions?

-- 
Rod Grimes rgri...@freebsd.org




Re: bhyve uses all available memory during IO-intensive operations

2017-12-02 Thread Paul Webster via freebsd-virtualization
Just as I was near one at the time: apparently ext4 defaults to a 4096-byte block size.

 sudo tune2fs -l /dev/sda

tune2fs 1.43.4 (31-Jan-2017)
Filesystem volume name:  xdock
Last mounted on:  /var/lib/docker
Filesystem UUID:  b1dd0790-970d-4596-9192-49c704337015
Filesystem magic number:  0xEF53
Filesystem revision #:1 (dynamic)
Filesystem features:  has_journal ext_attr resize_inode dir_index
filetype needs_recovery extent 64bit flex_bg sparse_super large_file
huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options:user_xattr acl
Filesystem state: clean
Errors behavior:  Continue
Filesystem OS type:   Linux
Inode count:  14655488
Block count:  58607766
Reserved block count: 2930388
Free blocks:  44314753
Free inodes:  13960548
First block:  0
Block size:   4096
Fragment size:4096
Group descriptor size:64
Reserved GDT blocks:  1024
Blocks per group: 32768
Fragments per group:  32768
Inodes per group: 8192
Inode blocks per group:   512
Flex block group size:16
Filesystem created:   Thu Nov  9 10:32:16 2017
Last mount time:  Wed Nov 29 17:08:30 2017
Last write time:  Wed Nov 29 17:08:30 2017
Mount count:  21
Maximum mount count:  -1
Last checked: Thu Nov  9 10:32:16 2017
Check interval:   0 (<none>)
Lifetime writes:  147 GB
Reserved blocks uid:  0 (user root)
Reserved blocks gid:  0 (group root)
First inode:  11
Inode size:   256
Required extra isize: 32
Desired extra isize:  32
Journal inode:8
Default directory hash:   half_md4
Directory Hash Seed:  e943c6b0-9b5c-402a-a2ca-5f7dd094712d
Journal backup:   inode blocks
Checksum type:crc32c
Checksum: 0x04f644e2
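
The FreeBSD-guest equivalents, for anyone sizing a zvol for a BSD guest (a
sketch; the device node and dataset name below are made up):

    # UFS guest: block and fragment size of the root file system
    dumpfs /dev/ada0p2 | grep -E 'bsize|fsize'

    # ZFS-on-root guest: dataset recordsize
    zfs get recordsize zroot/ROOT/default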


On 2 December 2017 at 06:47, K. Macy  wrote:

> On Fri, Dec 1, 2017 at 9:23 PM, Dustin Wenz 
> wrote:
> > I have noticed significant storage amplification for my zvols; that could
> > very well be the reason. I would like to know more about why it happens.
> >
> > Since the volblocksize is 512 bytes, I certainly expect extra cpu
> overhead
> > (and maybe an extra 1k or so worth of checksums for each 128k block in
> the
> > vm), but how do you get a 10X expansion in stored data?
> >
> > What is the recommended zvol block size for a FreeBSD/ZFS guest? Perhaps
> 4k,
> > to match the most common mass storage sector size?
>
> I would err somewhat larger; the benefits of shallower indirect block
> chains will outweigh the cost of RMW, I would guess. And I think it
> should be your guest file system block size. I don't know what ext4's
> default is, but ext2/3 was 16k by default IIRC.
>
> -M
>
> >
> > - .Dustin
> >
> > On Dec 1, 2017, at 9:18 PM, K. Macy  wrote:
> >
> > One thing to watch out for with chyves if your virtual disk is more
> > than 20G is the fact that it uses 512 byte blocks for the zvols it
> > creates. I ended up using 1.4TB while only half filling a 250G zvol.
> > Chyves is quick and easy, but it's not exactly production ready.
> >
> > -M
> >
> >
> >
> > On Thu, Nov 30, 2017 at 3:15 PM, Dustin Wenz 
> wrote:
> >
> > I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is
> > also FreeBSD 11.1). Their sole purpose is to house some medium-sized
> > Postgres databases (100-200GB). The host system has 64GB of real memory
> and
> > 112GB of swap. I have configured each guest to only use 16GB of memory,
> yet
> > while doing my initial database imports in the VMs, bhyve will quickly
> grow
> > to use all available system memory and then be killed by the kernel:
> >
> >
> >kernel: swap_pager: I/O error - pageout failed; blkno 1735,size
> 4096,
> > error 12
> >
> >kernel: swap_pager: I/O error - pageout failed; blkno 1610,size
> 4096,
> > error 12
> >
> >kernel: swap_pager: I/O error - pageout failed; blkno 1763,size
> 4096,
> > error 12
> >
> >kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space
> >
> >
> > The OOM condition seems related to doing moderate IO within the VM,
> though
> > nothing within the VM itself shows high memory usage. This is the chyves
> > config for one of them:
> >
> >
> >bargs  -A -H -P -S
> >
> >bhyve_disk_typevirtio-blk
> >
> >bhyve_net_type virtio-net
> >
> >bhyveload_flags
> >
> >chyves_guest_version   0300
> >
> >cpu4
> >
> >creation   Created on Mon Oct 23 16:17:04 CDT
> 2017 by
> > chyves v0.2.0 2016/09/11 using __create()
> >
> >loader bhyveload
> >
> >net_ifaces tap51
> >
> >os default
> >
> >

Re: Linux lockups inside bhyve VM on FreeBSD 11.1

2017-12-02 Thread Kai Gallasch


On 01.12.2017 at 03:41, Jason Tubnor wrote:
> 
> On 1 December 2017 at 08:29, Kai Gallasch >
> wrote:
> 
> Hello.
> 
> Yesterday an Apache 2.4 running inside a Debian 9 Linux bhyve VM locked up
> on one of my servers (FreeBSD 11.1-RELENG, GENERIC kernel) overloading
> the VM.
> 
> The VM uses a ZFS zvol blockdevice on top of a zpool, consisting of two
> mirrored SSDs.
> 
> I was able to enter the VM through the bhyve console, kill and restart
> the stuck apache process and regain stability inside the VM.
> 
> I found below output in the Linux dmesg and suspect the ext4 journaling
> to be the culprit.
> 
> Has anyone experienced similar lockups running Linux inside a bhyve VM?
> At the time when this happened there was no high I/O on the VM zpool.
> 
> 
> Have you set vfs.zfs.arc_max to a lower value to allow for bhyve head
> room?  How was the host system swap, did the host start to eat into it?
> 
> I run a few guests with Ubuntu 16.04 but mainly use XFS for areas that
> aren't system related and haven't come across this issue.

Hello Jason.

My bhyve host server has 96GB RAM and all Linux VMs together are
allocated 20GB. I now have set vfs.zfs.arc_max to 64G to see if other
lockups occur.
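
For anyone wanting to do the same, this is roughly the shape of it (a sketch;
64G is just what leaves room for my guests on a 96GB host):

    # /boot/loader.conf -- cap the ARC so bhyve guests keep some head room
    vfs.zfs.arc_max="68719476864"    # 64 GB, in bytes

    # or adjust it on a running system (also in bytes)
    sysctl vfs.zfs.arc_max=68719476864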

K.
