Hi Matt,
Is there a reason why the resident memory used by a bhyve guest is quite
different when comparing ps/top with bhyvectl? Does bhyvectl take into
account something in kernel space that top/ps doesn't see?
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 12670 0.1 1.4 2120984 951764 1 SC+ 22Jan20 157:29.22 bhyve: smtp-a (bhyve)
# bhyvectl --get-stats --vm=smtp-a | grep Res
Resident memory 1439875072
1.4G vs ~930M
They could both be wrong, but not by a huge amount :(
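For reference, here's a minimal sketch in C that puts the two figures in
the same unit, assuming ps(1) reports RSS in KiB as it does on FreeBSD,
and plugging in the numbers quoted above:

#include <stdio.h>

int
main(void)
{
	long long bhyvectl_bytes = 1439875072LL; /* bhyvectl "Resident memory" */
	long long ps_rss_kib = 951764LL;         /* RSS column from ps */

	printf("bhyvectl: %.1f MiB\n", bhyvectl_bytes / (1024.0 * 1024.0));
	printf("ps RSS:   %.1f MiB\n", ps_rss_kib / 1024.0);
	return (0);
}

which prints roughly 1373 MiB vs 930 MiB.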
"top" shows the memory used by the bhyve process, which includes usage
by the hypervisor (which admittedly should be very small). However, the
guest usage is pages that have been faulted into the address space of
the bhyve process as a result of accesses by the hypervisor (e.g.
filling in network/disk buffers for i/o).
The bhyvectl stats count the pages that have been faulted into the EPT
paging structures as a result of guest accesses (code and data). This
will generally be more than the figure reported by top, since the guest
will take execution faults that the bhyve process won't.
Over time they should slowly converge.
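To illustrate what "faulted into the address space" means on the
process side, here is a minimal sketch using mincore(2) on FreeBSD,
with a throwaway anonymous mapping standing in for the guest memory
segment (it is not the actual guest mapping, just a demonstration):

#include <sys/mman.h>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	size_t len = 64 * 1024 * 1024;	/* 64 MiB stand-in mapping */
	long pgsz = sysconf(_SC_PAGESIZE);
	size_t npages = len / pgsz;
	size_t i, resident = 0;
	char *vec = malloc(npages);
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);

	if (p == MAP_FAILED || vec == NULL)
		return (1);
	memset(p, 0xa5, len / 2);	/* touch half the pages: they fault in */
	if (mincore(p, len, vec) == -1)
		return (1);
	for (i = 0; i < npages; i++)
		if (vec[i] & MINCORE_INCORE)
			resident++;
	printf("%zu of %zu pages resident\n", resident, npages);
	return (0);
}

Only the touched half of the mapping shows up as resident; untouched
pages have never been faulted in, so they aren't counted.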
A fully accurate count would have to walk both the guest portion of the
bhyve process's address space and the EPT paging structures, taking
care not to double-count pages resident in both. There are probably
better ways to do it: certainly an interesting design topic.
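For example, given per-page residency bitmaps for both structures, the
walk would count a page once if it is resident via either path. A
hypothetical sketch (no such bitmaps are exported today; the vector
contents are assumptions for illustration):

#include <stddef.h>
#include <stdint.h>

/*
 * proc_vec[i] is nonzero if guest page i is resident in the bhyve
 * process's address space, ept_vec[i] if it is resident in the EPT
 * paging structures. A page resident via both paths counts once.
 */
size_t
count_resident_union(const uint8_t *proc_vec, const uint8_t *ept_vec,
    size_t npages)
{
	size_t i, resident = 0;

	for (i = 0; i < npages; i++)
		if (proc_vec[i] | ept_vec[i])
			resident++;
	return (resident);
}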
I have a guest with 2G of memory allocated, and dmesg lists 2048MB of
real memory. The real & avail figures are also quite close, which
matches the output I generally expect on real hardware.
Hypervisor: Origin = "bhyve bhyve "
real memory = 2147483648 (2048 MB)
avail memory = 2043318272 (1948 MB)
However, I have a guest with 5G allocated, and get the following in dmesg -
Hypervisor: Origin = "bhyve bhyve "
real memory = 6442450944 (6144 MB)
avail memory = 5141663744 (4903 MB)
bhyveload -m 5G ...
bhyve -c 2 -m 5G -AHP ...
I haven't tested exactly where the behaviour changes. My only theory is
that it could be something to do with crossing the old 32-bit limit?
bhyve inserts a 1G region just below 4G for 32-bit PCI address space.
The 'real memory' figure printed by FreeBSD is just the highest
physical address: a 5G guest is laid out as 3G below 4G plus the
remaining 2G starting at 4G, giving 6G as the highest physical address.
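A minimal sketch of that layout arithmetic, assuming the hole sits at
3G-4G so at most 3G of guest memory is placed below 4G, matching the
description above:

#include <stdio.h>

#define GB		(1024ULL * 1024 * 1024)
#define LOWMEM_LIMIT	(3 * GB)	/* guest RAM below the 3G-4G PCI hole */

/* Highest guest physical address for a given guest memory size. */
static unsigned long long
highest_phys_addr(unsigned long long memsize)
{
	if (memsize <= LOWMEM_LIMIT)
		return (memsize);
	/* the remainder is placed starting at 4G */
	return (4 * GB + (memsize - LOWMEM_LIMIT));
}

int
main(void)
{
	printf("2G guest: %llu MB\n", highest_phys_addr(2 * GB) / (1024 * 1024));
	printf("5G guest: %llu MB\n", highest_phys_addr(5 * GB) / (1024 * 1024));
	return (0);
}

which prints 2048 MB and 6144 MB, matching the two dmesg 'real memory'
lines above.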
later,
Peter.