On 3 Dec. 2017 12:21, "K. Macy" wrote:
>
> Storage amplification usually has to do with ZFS RAID-Z padding. If your
> ZVOL block size does not make sense with your disk sector size, and
> RAID-Z level, you can get pretty silly numbers.
That's not what I'm talking about here. If your volblocksize
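The RAID-Z padding point above can be made concrete with a little arithmetic. The sketch below follows the commonly described RAID-Z allocation rule (each stripe row of up to N-P data sectors carries P parity sectors, and the total is padded up to a multiple of P+1); the 6-disk RAID-Z2 with ashift=12 layout is an illustrative assumption, not a configuration taken from this thread.

```shell
# Rough RAID-Z allocation model.
# Usage: raidz_sectors DATA_BYTES SECTOR_SIZE NDISKS PARITY
raidz_sectors() {
    data=$(( ($1 + $2 - 1) / $2 ))                # data sectors, rounded up
    rows=$(( (data + $3 - $4 - 1) / ($3 - $4) ))  # stripe rows needed
    total=$(( data + rows * $4 ))                 # parity sectors per row
    pad=$(( $4 + 1 ))
    echo $(( (total + pad - 1) / pad * pad ))     # pad to a multiple of P+1
}

# A 512 B volblock on a hypothetical 6-disk RAID-Z2 with 4 KiB sectors:
raidz_sectors 512 4096 6 2      # prints 3  -> 12 KiB allocated for 512 B logical
# A 64 KiB volblock on the same layout:
raidz_sectors 65536 4096 6 2    # prints 24 -> 96 KiB for 64 KiB, parity cost only
```

At a 512-byte volblocksize the model gives roughly 24x amplification, which is the kind of "pretty silly number" being described; a larger volblocksize amortizes the parity and padding down to the raw RAID-Z overhead.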
There was a standards group, but now the interfaces used by the Linux
virtio drivers define the de facto standard. As virtual interfaces go,
they're fairly decent, so all we need is a backend.
The one thing FreeBSD doesn't have that I miss is CPU hot plug when running
as a guest - or at least a mec
Since I happened to be near an ext4 system at the time: apparently 4096 is the default
sudo tune2fs -l /dev/sda
tune2fs 1.43.4 (31-Jan-2017)
Filesystem volume name: xdock
Last mounted on: /var/lib/docker
Filesystem UUID: b1dd0790-970d-4596-9192-49c704337015
Filesystem magic number: 0xEF53
Files
I have noticed significant storage amplification for my zvols; that could very
well be the reason. I would like to know more about why it happens.
Since the volblocksize is 512 bytes, I certainly expect extra cpu overhead (and
maybe an extra 1k or so worth of checksums for each 128k block in th
One thing to watch out for with chyves, if your virtual disk is more
than 20G, is that it uses 512-byte blocks for the zvols it creates.
I ended up using 1.4TB after only half-filling a 250G zvol.
chyves is quick and easy, but it's not exactly production ready.
-M
On Thu, Nov 30, 2017 at
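If the volume can be recreated, one way around the 512-byte default described above is to make the backing zvol yourself with a larger volblocksize before pointing the guest at it. A sketch, with placeholder pool and dataset names; note that volblocksize is fixed at creation time, so this only helps for new volumes:

```sh
# Hypothetical names and sizes; -s makes the zvol sparse.
zfs create -s -V 250G -o volblocksize=64k tank/guests/guest0-disk0
zfs get volblocksize tank/guests/guest0-disk0   # verify before installing
```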
On 02/12/2017 08:11, Dustin Wenz wrote:
>
> The commit history shows that chyves defaults to -S if you are
> hosting from FreeBSD 10.3 or later. I'm sure they had a reason for
> > doing that, but I don't know what that would be. It seems to be an
> inefficient use of main memory if you need to run a l
I've been running a database stress test on my VMs for the last few hours
without issue, and I've noticed no unexpected memory usage. Prior to changing
the wired option, this would never have run as long. I haven't limited the ARC
size yet, but I probably will since it sounds like best practice
Yep, and that's also why bhyve is getting killed instead of paging out. For
some inexplicable reason, chyves defaulted to setting -S on new VMs. That has
the effect of wiring in the max amount of memory for each guest at startup.
I changed the bargs option to "-A -H -P" instead of "-A -H -P -S".
bargs -A -H -P -S
The -S flag to bhyve wires guest memory so it won't be swapped out.
later,
Peter.
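If I understand chyves' property interface correctly, the change described above would look something like this (the guest name is a placeholder, and the `set`/`get` syntax is assumed from the chyves documentation rather than shown in this thread):

```sh
chyves guest0 set bargs="-A -H -P"   # drop -S so guest memory can be paged out
chyves guest0 get bargs              # verify the new value
```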
_______________________________________________
freebsd-virtualization@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to "freebsd-virtualization-unsubscribe@freebsd.org"
Here's the top -uS output from a test this morning:
last pid: 57375; load averages: 8.29, 7.02, 4.05
Hi Dustin, All,
01.12.2017 02:15, Dustin Wenz wrote:
> bhyve will quickly grow to use all available system memory
I'd say that some logs/stats/values should help here.
JFYI: I've just successfully imported a whole-Earth
OSM/Nominatim/PostGIS database in a bhyve guest (CentOS-7.3, 14 CPU,
36GB RAM, 1T
I am using a zvol as the storage for the VM, and I do not have any ARC limits
set. However, the bhyve process itself ends up grabbing the vast majority of
memory.
I’ll run a test tomorrow to get the exact output from top.
- .Dustin
I'm using chyves on FreeBSD 11.1 RELEASE to manage a few VMs (guest OS is also
FreeBSD 11.1). Their sole purpose is to house some medium-sized Postgres
databases (100-200GB). The host system has 64GB of real memory and 112GB of
swap. I have configured each guest to only use 16GB of memory, yet w