Re: bhyve uses all available memory during IO-intensive operations

2017-11-30 Thread Allan Jude
On 2017-11-30 22:10, Dustin Wenz wrote:
> I am using a zvol as the storage for the VM, and I do not have any ARC
> limits set. However, the bhyve process itself ends up grabbing the vast
> majority of memory. 
> 
> I’ll run a test tomorrow to get the exact output from top.
> 
>    - .Dustin
> 
> On Nov 30, 2017, at 5:28 PM, Allan Jude wrote:
> 
>> On 11/30/2017 18:15, Dustin Wenz wrote:
>>> I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest
>>> OS is also FreeBSD 11.1). Their sole purpose is to house some
>>> medium-sized Postgres databases (100-200GB). The host system has 64GB
>>> of real memory and 112GB of swap. I have configured each guest to
>>> only use 16GB of memory, yet while doing my initial database imports
>>> in the VMs, bhyve will quickly grow to use all available system
>>> memory and then be killed by the kernel:
>>>
>>>    kernel: swap_pager: I/O error - pageout failed; blkno 1735,size
>>> 4096, error 12
>>>    kernel: swap_pager: I/O error - pageout failed; blkno 1610,size
>>> 4096, error 12
>>>    kernel: swap_pager: I/O error - pageout failed; blkno 1763,size
>>> 4096, error 12
>>>    kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space
>>>
>>> The OOM condition seems related to doing moderate IO within the VM,
>>> though nothing within the VM itself shows high memory usage. This is
>>> the chyves config for one of them:
>>>
>>>    bargs  -A -H -P -S
>>>    bhyve_disk_type    virtio-blk
>>>    bhyve_net_type virtio-net
>>>    bhyveload_flags
>>>    chyves_guest_version   0300
>>>    cpu    4
>>>    creation   Created on Mon Oct 23 16:17:04 CDT 2017
>>> by chyves v0.2.0 2016/09/11 using __create()
>>>    loader bhyveload
>>>    net_ifaces tap51
>>>    os default
>>>    ram    16G
>>>    rcboot 0
>>>    revert_to_snapshot
>>>    revert_to_snapshot_method  off
>>>    serial nmdm51
>>>    template   no
>>>    uuid   8495a130-b837-11e7-b092-0025909a8b56
>>>
>>>
>>> I've also tried using different bhyve_disk_types, with no
>>> improvement. How is it that bhyve can use far more memory than I'm
>>> specifying?
>>>
>>>    - .Dustin
>>>
>>
>> Can you show 'top' output? What makes you think bhyve is using the
>> memory? Are you using ZFS? Have you limited vfs.zfs.arc_max to leave
>> some free RAM for the bhyve instances?
>>
>> -- 
>> Allan Jude

The default limit for the ARC is 99% of your RAM, so you are definitely
going to want to reduce it: for example, to 90% of your 64GB of RAM,
less the total amount of RAM you have assigned to all VMs.
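As a rough sketch of that arithmetic (the count of three 16 GB guests is an assumption; the thread only says "a few VMs"), the resulting value can go into /etc/sysctl.conf or be applied at runtime:

```shell
# Sketch only: 64 GB host; assume three 16 GB guests (the exact count
# is not stated in the thread).
host_ram=$((64 * 1024 * 1024 * 1024))
vm_total=$((3 * 16 * 1024 * 1024 * 1024))

# 90% of physical RAM, less the total RAM assigned to all VMs.
arc_max=$(( host_ram * 90 / 100 - vm_total ))

echo "vfs.zfs.arc_max=${arc_max}"
# To persist across reboots, append that line to /etc/sysctl.conf,
# or apply it now with: sysctl vfs.zfs.arc_max=${arc_max}
```

With these example numbers that leaves roughly 9.6 GB for the ARC; the point is simply that VM memory has to be subtracted out, or the ARC and the guests end up competing for the same pages.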

-- 
Allan Jude
___
freebsd-virtualization@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
"freebsd-virtualization-unsubscr...@freebsd.org"


Re: bhyve uses all available memory during IO-intensive operations

2017-11-30 Thread Dustin Wenz
I am using a zvol as the storage for the VM, and I do not have any ARC limits 
set. However, the bhyve process itself ends up grabbing the vast majority of 
memory. 

I’ll run a test tomorrow to get the exact output from top.

   - .Dustin

> On Nov 30, 2017, at 5:28 PM, Allan Jude wrote:
> 
>> On 11/30/2017 18:15, Dustin Wenz wrote:
>> I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is 
>> also FreeBSD 11.1). Their sole purpose is to house some medium-sized 
>> Postgres databases (100-200GB). The host system has 64GB of real memory and 
>> 112GB of swap. I have configured each guest to only use 16GB of memory, yet 
>> while doing my initial database imports in the VMs, bhyve will quickly grow 
>> to use all available system memory and then be killed by the kernel:
>> 
>>kernel: swap_pager: I/O error - pageout failed; blkno 1735,size 4096, 
>> error 12
>>kernel: swap_pager: I/O error - pageout failed; blkno 1610,size 4096, 
>> error 12
>>kernel: swap_pager: I/O error - pageout failed; blkno 1763,size 4096, 
>> error 12
>>kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space
>> 
>> The OOM condition seems related to doing moderate IO within the VM, though 
>> nothing within the VM itself shows high memory usage. This is the chyves 
>> config for one of them:
>> 
>>bargs  -A -H -P -S
>>bhyve_disk_typevirtio-blk
>>bhyve_net_type virtio-net
>>bhyveload_flags
>>chyves_guest_version   0300
>>cpu4
>>creation   Created on Mon Oct 23 16:17:04 CDT 2017 by 
>> chyves v0.2.0 2016/09/11 using __create()
>>loader bhyveload
>>net_ifaces tap51
>>os default
>>ram16G
>>rcboot 0
>>revert_to_snapshot
>>revert_to_snapshot_method  off
>>serial nmdm51
>>template   no
>>uuid   8495a130-b837-11e7-b092-0025909a8b56
>> 
>> 
>> I've also tried using different bhyve_disk_types, with no improvement. How 
>> is it that bhyve can use far more memory than I'm specifying?
>> 
>>- .Dustin
>> 
> 
> Can you show 'top' output? What makes you think bhyve is using the
> memory? Are you using ZFS? Have you limited vfs.zfs.arc_max to leave
> some free RAM for the bhyve instances?
> 
> -- 
> Allan Jude


Re: Linux lockups inside bhyve VM on FreeBSD 11.1

2017-11-30 Thread Jason Tubnor
On 1 December 2017 at 08:29, Kai Gallasch wrote:

> Hello.
>
> Yesterday an Apache 2.4 instance running inside a Debian 9 Linux bhyve VM
> locked up on one of my servers (FreeBSD 11.1-RELENG, GENERIC kernel),
> overloading the VM.
>
> The VM uses a ZFS zvol blockdevice on top of a zpool, consisting of two
> mirrored SSDs.
>
> I was able to enter the VM through the bhyve console, kill and restart
> the stuck apache process and regain stability inside the VM.
>
> I found the output below in the Linux dmesg and suspect ext4 journaling
> to be the culprit.
>
> Has anyone experienced similar lockups running Linux inside a bhyve VM?
> At the time when this happened there was no high I/O on the VM zpool.


Have you set vfs.zfs.arc_max to a lower value to leave headroom for
bhyve? How was the host system's swap? Did the host start to eat into it?

I run a few Ubuntu 16.04 guests, but mainly use XFS for areas that
aren't system-related, and I haven't come across this issue.
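To put numbers to the swap and ARC questions above, the host-side figures come from `swapinfo -h` and a couple of sysctls; the sketch below uses fabricated sample values (not measurements from this thread) to show how ARC size compares to physical memory:

```shell
# On a FreeBSD host, the raw numbers come from:
#   swapinfo -h                          # swap devices and usage
#   sysctl kstat.zfs.misc.arcstats.size  # current ARC size, in bytes
#   sysctl hw.physmem                    # physical memory, in bytes
# The values below are invented for a hypothetical 64 GB host:
arc_size=62277025792
hw_physmem=68719476736

pct=$(( arc_size * 100 / hw_physmem ))
echo "ARC is using ${pct}% of physical memory"
```

An ARC sitting near 90-100% of RAM on a host that is also swapping would point at the cache, not at the guests.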

Cheers,

Jason.


Re: bhyve uses all available memory during IO-intensive operations

2017-11-30 Thread Allan Jude
On 11/30/2017 18:15, Dustin Wenz wrote:
> I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is 
> also FreeBSD 11.1). Their sole purpose is to house some medium-sized Postgres 
> databases (100-200GB). The host system has 64GB of real memory and 112GB of 
> swap. I have configured each guest to only use 16GB of memory, yet while 
> doing my initial database imports in the VMs, bhyve will quickly grow to use 
> all available system memory and then be killed by the kernel:
> 
>   kernel: swap_pager: I/O error - pageout failed; blkno 1735,size 4096, 
> error 12
>   kernel: swap_pager: I/O error - pageout failed; blkno 1610,size 4096, 
> error 12
>   kernel: swap_pager: I/O error - pageout failed; blkno 1763,size 4096, 
> error 12
>   kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space
> 
> The OOM condition seems related to doing moderate IO within the VM, though 
> nothing within the VM itself shows high memory usage. This is the chyves 
> config for one of them:
> 
>   bargs  -A -H -P -S
>   bhyve_disk_typevirtio-blk
>   bhyve_net_type virtio-net
>   bhyveload_flags
>   chyves_guest_version   0300
>   cpu4
>   creation   Created on Mon Oct 23 16:17:04 CDT 2017 by 
> chyves v0.2.0 2016/09/11 using __create()
>   loader bhyveload
>   net_ifaces tap51
>   os default
>   ram16G
>   rcboot 0
>   revert_to_snapshot
>   revert_to_snapshot_method  off
>   serial nmdm51
>   template   no
>   uuid   8495a130-b837-11e7-b092-0025909a8b56
> 
> 
> I've also tried using different bhyve_disk_types, with no improvement. How is 
> it that bhyve can use far more memory than I'm specifying?
> 
>   - .Dustin
> 

Can you show 'top' output? What makes you think bhyve is using the
memory? Are you using ZFS? Have you limited vfs.zfs.arc_max to leave
some free RAM for the bhyve instances?
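When that top output arrives, the useful comparison is each bhyve process's RES column against the 16 GB configured per guest. A sketch over fabricated top lines (invented for illustration, not real output from this host):

```shell
# Two fabricated FreeBSD top(1) lines for bhyve processes; the RES
# column (field 6) is the resident set size of each VM process.
cat > /tmp/top-sample.txt <<'EOF'
41123 root  25  0  16500M 16100M kqread  3  12:34  45.0% bhyve
41250 root  24  0  16500M 15900M kqread  1  10:01  40.0% bhyve
EOF

# Sum resident memory (field 6, in megabytes) across the bhyve processes:
awk '{ sub(/M$/, "", $6); total += $6 } END { print total "M resident in bhyve" }' \
    /tmp/top-sample.txt
```

If each bhyve process sits near its configured guest size while the host still swaps, the missing memory is elsewhere (for example, in the ARC).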

-- 
Allan Jude


bhyve uses all available memory during IO-intensive operations

2017-11-30 Thread Dustin Wenz
I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is also 
FreeBSD 11.1). Their sole purpose is to house some medium-sized Postgres 
databases (100-200GB). The host system has 64GB of real memory and 112GB of 
swap. I have configured each guest to only use 16GB of memory, yet while doing 
my initial database imports in the VMs, bhyve will quickly grow to use all 
available system memory and then be killed by the kernel:

kernel: swap_pager: I/O error - pageout failed; blkno 1735,size 4096, 
error 12
kernel: swap_pager: I/O error - pageout failed; blkno 1610,size 4096, 
error 12
kernel: swap_pager: I/O error - pageout failed; blkno 1763,size 4096, 
error 12
kernel: pid 41123 (bhyve), uid 0, was killed: out of swap space

The OOM condition seems related to doing moderate IO within the VM, though 
nothing within the VM itself shows high memory usage. This is the chyves config 
for one of them:

bargs  -A -H -P -S
bhyve_disk_typevirtio-blk
bhyve_net_type virtio-net
bhyveload_flags
chyves_guest_version   0300
cpu4
creation   Created on Mon Oct 23 16:17:04 CDT 2017 by 
chyves v0.2.0 2016/09/11 using __create()
loader bhyveload
net_ifaces tap51
os default
ram16G
rcboot 0
revert_to_snapshot
revert_to_snapshot_method  off
serial nmdm51
template   no
uuid   8495a130-b837-11e7-b092-0025909a8b56


I've also tried using different bhyve_disk_types, with no improvement. How is 
it that bhyve can use far more memory than I'm specifying?

- .Dustin



Linux lockups inside bhyve VM on FreeBSD 11.1

2017-11-30 Thread Kai Gallasch
Hello.

Yesterday an Apache 2.4 instance running inside a Debian 9 Linux bhyve VM
locked up on one of my servers (FreeBSD 11.1-RELENG, GENERIC kernel),
overloading the VM.

The VM uses a ZFS zvol block device on top of a zpool consisting of two
mirrored SSDs.

I was able to enter the VM through the bhyve console, kill and restart
the stuck apache process and regain stability inside the VM.

I found the output below in the Linux dmesg and suspect ext4 journaling
to be the culprit.

Has anyone experienced similar lockups running Linux inside a bhyve VM?
At the time when this happened there was no high I/O on the VM zpool.

Cheers,
K.


[1594985.015199] INFO: task jbd2/vda1-8:161 blocked for more than 120
seconds.
[1594985.015841]   Not tainted 4.9.0-4-amd64 #1 Debian 4.9.51-1
[1594985.016375] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[1594985.017074] jbd2/vda1-8 D0   161  2 0x
[1594985.017078]  885477ec5400  88547620e280
88547fc18240
[1594985.017080]  8e00e500 b056c0957ca0 8da038e3
8854765bd088
[1594985.017081]  0246 88547fc18240 b056c0957d80
88547620e280
[1594985.017082] Call Trace:
[1594985.017116]  [] ? __schedule+0x233/0x6d0
[1594985.017131]  [] ? prepare_to_wait_event+0xf0/0xf0
[1594985.017132]  [] ? schedule+0x32/0x80
[1594985.017165]  [] ?
jbd2_journal_commit_transaction+0x25f/0x17a0 [jbd2]
[1594985.017171]  [] ? update_curr+0xe1/0x160
[1594985.017172]  [] ? account_entity_dequeue+0xa4/0xc0
[1594985.017173]  [] ? prepare_to_wait_event+0xf0/0xf0
[1594985.017176]  [] ? kjournald2+0xc2/0x260 [jbd2]
[1594985.017177]  [] ? prepare_to_wait_event+0xf0/0xf0
[1594985.017180]  [] ? commit_timeout+0x10/0x10 [jbd2]
[1594985.017186]  [] ? do_group_exit+0x3a/0xa0
[1594985.017191]  [] ? kthread+0xd7/0xf0
[1594985.017192]  [] ? kthread_park+0x60/0x60
[1594985.017198]  [] ? ret_from_fork+0x25/0x30
[1594985.017202] INFO: task rs:main Q:Reg:407 blocked for more than 120
seconds.
[1594985.017841]   Not tainted 4.9.0-4-amd64 #1 Debian 4.9.51-1
[1594985.018373] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[1594985.019116] rs:main Q:Reg   D0   407  1 0x
[1594985.019118]  885476928000  885479283140
88547fc18240
[1594985.019119]  8e00e500 b056c0c1fb48 8da038e3
2406f83c84b44b0f
[1594985.019121]  00ff8853c8784380 88547fc18240 b056c0c1fb68
885479283140
[1594985.019122] Call Trace:
[1594985.019124]  [] ? __schedule+0x233/0x6d0
[1594985.019125]  [] ? schedule+0x32/0x80
[1594985.019129]  [] ?
wait_transaction_locked+0x86/0xc0 [jbd2]
[1594985.019130]  [] ?
prepare_to_wait_event+0xf0/0xf0
[1599459.680158] serial8250: too much work for irq4
[1594985.019132]  [] ?
add_transaction_credits+0x1b8/0x290 [jbd2]
[1594985.019142]  [] ? __switch_to+0x2c1/0x6d0
[1594985.019145]  [] ? start_this_handle+0x105/0x400
[jbd2]
[1594985.019146]  [] ? __schedule+0x23b/0x6d0
[1594985.019147]  [] ? check_preempt_wakeup+0x103/0x210
[1594985.019150]  [] ? jbd2__journal_start+0xd9/0x1e0
[jbd2]
[1594985.019238]  [] ? ext4_dirty_inode+0x2d/0x60 [ext4]
[1594985.019253]  [] ? __mark_inode_dirty+0x165/0x350
[1594985.019258]  [] ? generic_update_time+0x79/0xd0
[1594985.019259]  [] ? current_time+0x36/0x70
[1594985.019260]  [] ? file_update_time+0xbc/0x110
[1594985.019271]  [] ?
__generic_file_write_iter+0x99/0x1b0
[1594985.019278]  [] ? ext4_file_write_iter+0x90/0x370
[ext4]
[1594985.019288]  [] ? do_futex+0x2c9/0xb00
[1594985.019294]  [] ? fsnotify+0x381/0x4e0
[1594985.019299]  [] ? new_sync_write+0xda/0x130
[1594985.019305]  [] ? vfs_write+0xb0/0x190
[1594985.019307]  [] ? SyS_write+0x52/0xc0
[1594985.019309]  [] ?
system_call_fast_compare_end+0xc/0x9b
[1594985.019344] INFO: task kworker/u8:2:19882 blocked for more than 120
seconds.
[1594985.019985]   Not tainted 4.9.0-4-amd64 #1 Debian 4.9.51-1
[1594985.020512] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[1594985.021215] kworker/u8:2D0 19882  2 0x
[1594985.021220] Workqueue: writeback wb_workfn (flush-254:0)



Najat, come back to Facebook in one click

2017-11-30 Thread Facebook via freebsd-virtualization
Hello Najat,

It looks like you are having trouble logging in to Facebook. Click the
button below and we will log you in: Return to Facebook. If you did not
try to log in, please let us know.

Thanks,
The Facebook Team

This message was sent to freebsd-virtualization@freebsd.org at your request.
Facebook, Inc., Attention: Community Support, 1 Hacker Way, Menlo Park, CA 94025



We are glad to see you back on Facebook

2017-11-30 Thread Facebook via freebsd-virtualization
Hello Najat,

Facebook is a good way to keep in touch with your friends, especially if
you have not seen them for a while. Find your friends using Facebook's
automatic friend finder: Find Friends

Thanks,
The Facebook Team

This message was sent to freebsd-virtualization@freebsd.org. If you no
longer wish to receive these messages from Facebook, please follow the
link below to unsubscribe.
https://www.facebook.com/o.php?k=AS2nxm8EX3GwQe87=1163396950=55f390ce17b6aG45580756G0G87
Facebook, Inc., Attention: Community Support, 1 Hacker Way, Menlo Park, CA 94025



See Samia Lami's message and the other notifications you missed

2017-11-30 Thread Facebook via freebsd-virtualization

Go to Facebook
https://www.facebook.com/n/?aref=1512071259324394=email=55f383cb37d18G45580756G55f3886497feaG32b=2.1512071260.AbzrDG_4sAJcUxTVlWY_m=freebsd-virtualization%40freebsd.org=2nd_cta


View notifications
https://www.facebook.com/n/?notifications=1512071259324394=email=55f383cb37d18G45580756G55f3886497feaG32b=2.1512071260.AbzrDG_4sAJcUxTVlWY_m=freebsd-virtualization%40freebsd.org=1st_cta



Hello Najat,

Quite a lot has happened on Facebook since you last logged in. Here are
some notifications you may have missed.

"  2 messages
  9 friend requests
  26 new notifications"

Thanks,
The Facebook Team




This message was sent to freebsd-virtualization@freebsd.org. If you no
longer wish to receive these messages from Facebook, please follow the
link below to unsubscribe.
https://www.facebook.com/o.php?k=AS1GRSsE5W4kX35x=1163396950=55f383cb37d18G45580756G55f3886497feaG32b
Facebook, Inc., Attention: Community Support, 1 Hacker Way, Menlo Park, CA 94025
