ZVOL backed bhyve VMs running on UFS or ZFS - Best practices, pros and cons, performance

2019-06-24 Thread Kai Gallasch
Hi.

I am planning to convert several 11.2 FreeBSD Jails into bhyve VMs.

My bhyve setup uses a ZVOL for each VM (currently Debian and Ubuntu
Linux using ext3/4) and I want to keep it that way for the new FreeBSD VMs.

I really like the idea of having ZFS as file system inside the FreeBSD
guests, but wonder if there are any strong points against running ZFS on
top of ZFS.

- Is the I/O performance of a FreeBSD guest with ZFS much worse than that
of a UFS-only guest, or only moderately slower (e.g. by a third)? Any numbers?

- Wasted RAM allocation: Do FreeBSD guests with ZFS make any sense for
small memory allocations < 2 GB? (Because ZFS caches will possibly
consume most of the memory)

- I/O bloat? Will you waste performance on the physical disks on the
host server when running most of your bhyve guests with ZFS?

- What ZFS send/receive performance can be expected in and out of a ZFS
guest running on top of a ZVOL? (Is there an I/O penalty?)

- Are there any ZFS or ZPOOL parameters inside the ZFS guest that should
be tuned to reflect the situation that they are virtualized on a ZVOL?
(maybe to save precious RAM)

- Zpool Scrubbing inside the VM should not be necessary, right?

- Can the file system journaling of a ZVOL-hosted FreeBSD guest running
UFS cope with a crash of the host server? Will UFS journaling preserve
guest file system integrity even if an older ZVOL snapshot of the VM is
rolled back? (VM not running at rollback; snapshot taken automatically
while the VM was in state "running")



Regards,
Kai.
___
freebsd-virtualization@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
"freebsd-virtualization-unsubscr...@freebsd.org"


Re: Linux lockups inside bhyve VM on FreeBSD 11.1

2017-12-02 Thread Kai Gallasch


Am 01.12.2017 um 03:41 schrieb Jason Tubnor:
> 
> On 1 December 2017 at 08:29, Kai Gallasch <k...@free.de> wrote:
> 
> Hello.
> 
> Yesterday an Apache 2.4 instance running inside a Debian 9 Linux bhyve VM
> locked up on one of my servers (FreeBSD 11.1-RELENG, GENERIC kernel),
> overloading the VM.
> 
> The VM uses a ZFS zvol blockdevice on top of a zpool, consisting of two
> mirrored SSDs.
> 
> I was able to enter the VM through the bhyve console, kill and restart
> the stuck apache process and regain stability inside the VM.
> 
> I found below output in the Linux dmesg and suspect the ext4 journaling
> to be the culprit.
> 
> Has anyone experienced similar lockups running Linux inside a bhyve VM?
> At the time when this happened there was no high I/O on the VM zpool.
> 
> 
> Have you set vfs.zfs.arc_max to a lower value to allow for bhyve head
> room?  How was the host system swap, did the host start to eat into it?
> 
> I run a few guests with Ubuntu 16.04 but mainly use XFS for areas that
> aren't system related and haven't come across this issue.

Hello Jason.

My bhyve host server has 96 GB RAM and all Linux VMs together are
allocated 20 GB. I have now set vfs.zfs.arc_max to 64G to see whether
further lockups occur.
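For reference, this is how I applied the cap (a sketch of my setup; 64 GiB expressed in bytes):

```shell
# Cap the host's ZFS ARC so bhyve guests have guaranteed head room.
# Persists across reboots via loader.conf:
echo 'vfs.zfs.arc_max="68719476736"' >> /boot/loader.conf

# On recent FreeBSD releases the tunable is also writable at runtime:
sysctl vfs.zfs.arc_max=68719476736

# Check the current ARC size against the new limit:
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
```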

K.



Linux lockups inside bhyve VM on FreeBSD 11.1

2017-11-30 Thread Kai Gallasch
Hello.

Yesterday an Apache 2.4 instance running inside a Debian 9 Linux bhyve VM
locked up on one of my servers (FreeBSD 11.1-RELENG, GENERIC kernel),
overloading the VM.

The VM uses a ZFS zvol blockdevice on top of a zpool, consisting of two
mirrored SSDs.

I was able to enter the VM through the bhyve console, kill and restart
the stuck apache process and regain stability inside the VM.

I found below output in the Linux dmesg and suspect the ext4 journaling
to be the culprit.

Has anyone experienced similar lockups running Linux inside a bhyve VM?
At the time when this happened there was no high I/O on the VM zpool.

Cheers,
K.


[1594985.015199] INFO: task jbd2/vda1-8:161 blocked for more than 120
seconds.
[1594985.015841]   Not tainted 4.9.0-4-amd64 #1 Debian 4.9.51-1
[1594985.016375] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[1594985.017074] jbd2/vda1-8 D0   161  2 0x
[1594985.017078]  885477ec5400  88547620e280
88547fc18240
[1594985.017080]  8e00e500 b056c0957ca0 8da038e3
8854765bd088
[1594985.017081]  0246 88547fc18240 b056c0957d80
88547620e280
[1594985.017082] Call Trace:
[1594985.017116]  [] ? __schedule+0x233/0x6d0
[1594985.017131]  [] ? prepare_to_wait_event+0xf0/0xf0
[1594985.017132]  [] ? schedule+0x32/0x80
[1594985.017165]  [] ?
jbd2_journal_commit_transaction+0x25f/0x17a0 [jbd2]
[1594985.017171]  [] ? update_curr+0xe1/0x160
[1594985.017172]  [] ? account_entity_dequeue+0xa4/0xc0
[1594985.017173]  [] ? prepare_to_wait_event+0xf0/0xf0
[1594985.017176]  [] ? kjournald2+0xc2/0x260 [jbd2]
[1594985.017177]  [] ? prepare_to_wait_event+0xf0/0xf0
[1594985.017180]  [] ? commit_timeout+0x10/0x10 [jbd2]
[1594985.017186]  [] ? do_group_exit+0x3a/0xa0
[1594985.017191]  [] ? kthread+0xd7/0xf0
[1594985.017192]  [] ? kthread_park+0x60/0x60
[1594985.017198]  [] ? ret_from_fork+0x25/0x30
[1594985.017202] INFO: task rs:main Q:Reg:407 blocked for more than 120
seconds.
[1594985.017841]   Not tainted 4.9.0-4-amd64 #1 Debian 4.9.51-1
[1594985.018373] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[1594985.019116] rs:main Q:Reg   D0   407  1 0x
[1594985.019118]  885476928000  885479283140
88547fc18240
[1594985.019119]  8e00e500 b056c0c1fb48 8da038e3
2406f83c84b44b0f
[1594985.019121]  00ff8853c8784380 88547fc18240 b056c0c1fb68
885479283140
[1594985.019122] Call Trace:
[1594985.019124]  [] ? __schedule+0x233/0x6d0
[1594985.019125]  [] ? schedule+0x32/0x80
[1594985.019129]  [] ?
wait_transaction_locked+0x86/0xc0 [jbd2]
[1594985.019130]  [] ? prepare_to_wait_event+0xf0/0xf0
[1599459.680158] serial8250: too much work for irq4
[1594985.019132]  [] ?
add_transaction_credits+0x1b8/0x290 [jbd2]
[1594985.019142]  [] ? __switch_to+0x2c1/0x6d0
[1594985.019145]  [] ? start_this_handle+0x105/0x400
[jbd2]
[1594985.019146]  [] ? __schedule+0x23b/0x6d0
[1594985.019147]  [] ? check_preempt_wakeup+0x103/0x210
[1594985.019150]  [] ? jbd2__journal_start+0xd9/0x1e0
[jbd2]
[1594985.019238]  [] ? ext4_dirty_inode+0x2d/0x60 [ext4]
[1594985.019253]  [] ? __mark_inode_dirty+0x165/0x350
[1594985.019258]  [] ? generic_update_time+0x79/0xd0
[1594985.019259]  [] ? current_time+0x36/0x70
[1594985.019260]  [] ? file_update_time+0xbc/0x110
[1594985.019271]  [] ?
__generic_file_write_iter+0x99/0x1b0
[1594985.019278]  [] ? ext4_file_write_iter+0x90/0x370
[ext4]
[1594985.019288]  [] ? do_futex+0x2c9/0xb00
[1594985.019294]  [] ? fsnotify+0x381/0x4e0
[1594985.019299]  [] ? new_sync_write+0xda/0x130
[1594985.019305]  [] ? vfs_write+0xb0/0x190
[1594985.019307]  [] ? SyS_write+0x52/0xc0
[1594985.019309]  [] ?
system_call_fast_compare_end+0xc/0x9b
[1594985.019344] INFO: task kworker/u8:2:19882 blocked for more than 120
seconds.
[1594985.019985]   Not tainted 4.9.0-4-amd64 #1 Debian 4.9.51-1
[1594985.020512] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[1594985.021215] kworker/u8:2D0 19882  2 0x
[1594985.021220] Workqueue: writeback wb_workfn (flush-254:0)



Re: MAC addresses to use for BHyve VM's running under FreeBSD?

2014-02-05 Thread Kai Gallasch
Am 05.02.2014 um 08:03 schrieb Craig Rodrigues:
> Hi,
> 
> I am running many BHyve VM's and am using tap interfaces
> with a single bridge.  I am configuring the IP addresses
> of these VM's via DHCP.
> 
> I need to have separate MAC addresses for each VM.
> 
> Can anyone recommend a range of MAC addresses to use?
> 
> I seem to recall that at the 2013 FreeBSD Vendor Summit in
> Sunnyvale, California, that George mentioned that
> there might be an Organizational Unique Identifier (OUI) for the FreeBSD
> project that we can use for BHyve VM's.  Is that right?
> 
> If not, can people recommend a range of addresses to use?

http://standards.ieee.org/develop/regauth/oui/public.html

Using "Search the Public MA-L Listing" with search term "FreeBSD" reveals:

--- snip ---

Here are the results of your search through the public section of the IEEE 
Standards OUI database report for freebsd:

  58-9C-FC   (hex)      FreeBSD Foundation
  589CFC     (base 16)  FreeBSD Foundation
                        P.O. Box 20247
                        Boulder CO 80308-3247
                        UNITED STATES
--- snap ---
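For illustration, an address from that OUI could be pinned to a guest like this. The interface name, slot numbers, and the low three octets are made-up placeholders, and you would have to manage uniqueness across your VMs yourself:

```shell
# Give the host-side tap interface a fixed MAC from the FreeBSD
# Foundation OUI (58:9C:FC); the last three octets are an arbitrary
# example:
ifconfig tap0 create
ifconfig tap0 ether 58:9c:fc:00:00:01
ifconfig tap0 up

# Newer bhyve versions can also set the MAC the guest itself sees, via
# the virtio-net device options (check bhyve(8) on your release):
#   bhyve ... -s 2:0,virtio-net,tap0,mac=58:9c:fc:00:00:01 ... vmname
```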


Regards,
K.

--
GPG-Key: A593 E38B E968 4DBE 14D6  2115 7065 4D7C 4FB1 F588
Key available from hkps://hkps.pool.sks-keyservers.net


PGP.sig
Description: Signed part of the message


Re: Virtualbox: time sync

2013-05-28 Thread Kai Gallasch
Am 26.05.2013 um 07:03 schrieb Marc Fournier:
 
 First, thank you for the answer about switching from straight bridge to using 
 a tap device … that made a *huge* difference … I'm not able to actually run a 
 build world within the environment without having the whole machine lock up …

Hi Marc.

Would you share your knowledge of using TAP(4) as an alternative to the generic 
vbox bridge mode?
I cannot find much reference to that, especially when it comes to running vbox 
on FreeBSD.

If you have several vbox guests bridged to a physical --bridgeadapter or to 
a tap device used as a bridge adapter - is there a way to lock down / enforce 
an IP address for a vbox guest? I don't like the idea that someone inside a 
vbox guest could use IP addresses that are already in use on the vbox server, 
or otherwise non-allowed IPs.

One possible solution I thought of is to have the switch that the vbox server 
hardware is plugged into lock down the MAC address of a vbox tap device to a 
fixed IP address (if the switch is capable of that).
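Alternatively, enforcement on the host itself might look something like the following. This is an untested sketch with placeholder names and addresses (tap0, 192.0.2.10), and by default the pfil hooks only see IP traffic, so it is a starting point rather than a complete policy:

```shell
# Let the packet filter inspect traffic on bridge member interfaces:
sysctl net.link.bridge.pfil_member=1

# Accept only the IPv4 address assigned to this guest and drop packets
# with any other (spoofed) source address coming in from its tap:
ipfw add 1000 allow ip from 192.0.2.10 to any in via tap0
ipfw add 1010 deny ip from any to any in via tap0
```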

Kai.