Changelog:
- the hugepages option is an enum (any | 2 | 1024)
  any: we try to use 1GB pages if the host supports them and the VM
  memory is a multiple of 1GB; if not, we use 2MB pages
  2: we force 2MB hugepages
  1024: we force 1GB hugepages (in this case the memory needs to be a
  multiple of 1GB)
- dynamic
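The selection logic described in the changelog could be sketched roughly as follows (a minimal sketch; `pick_hugepage_size` and its signature are illustrative, not the actual patch code):

```python
# Illustrative sketch of the hugepages enum (any | 2 | 1024) described above.
# Sizes are expressed in KiB, matching the kernel's sysfs naming.

GIB = 1024 * 1024 * 1024

def pick_hugepage_size(setting, memory_bytes, host_supports_1g):
    """Return the hugepage size in KiB for a given 'hugepages' setting.

    setting: 'any', '2' (force 2MB) or '1024' (force 1GB).
    """
    if setting == '1024':
        # forcing 1GB pages: memory must be a whole multiple of 1GB
        if memory_bytes % GIB != 0:
            raise ValueError("memory needs to be a multiple of 1GB")
        return 1048576  # 1GB pages
    if setting == '2':
        return 2048     # 2MB pages
    # 'any': prefer 1GB pages when the host supports them and the VM
    # memory is a whole multiple of 1GB; otherwise fall back to 2MB.
    if host_supports_1g and memory_bytes % GIB == 0:
        return 1048576
    return 2048
```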
Signed-off-by: Dominik Csapak
---
qm.adoc | 41 +
1 file changed, 41 insertions(+)
diff --git a/qm.adoc b/qm.adoc
index 9af2eb0..86400e3 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -71,6 +71,47 @@ lock manually (e.g., after a power
Signed-off-by: Dominik Csapak
---
qm.adoc | 16
1 file changed, 16 insertions(+)
diff --git a/qm.adoc b/qm.adoc
index fdb2552..9af2eb0 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -31,6 +31,22 @@ create and destroy virtual machines, and control execution
>>Is it worth to use 1GB pages?
For big VM memory, yes (databases of 100-200GB); I have seen 20% improvements.
Also (but I need to verify), the DPDK docs say it needs 1GB hugepages.
(I think it works with 2MB pages, but performance is badly degraded.)
I have almost finished reworking my patch, I'll try
otherwise, long kvm commands lead to systemd unit files with
very long lines, which confuses the systemd unit file parser.
Apparently systemd has a length limit for unit file lines and
(line-)breaks the description string at that point. Since
the rest of the description is probably not a valid
> As I don't have enough contiguous memory, all 1GB hugepages can't be allocated.
>
> Of course, with 2MB pages, it's a lot easier.
Which leads to the next question:
Is it worth to use 1GB pages?
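Whether the host supports 1GB pages at all can be detected from sysfs. A minimal sketch, assuming the standard `/sys/kernel/mm/hugepages` layout (the helper name is mine, not from the patch):

```python
# Detect which hugepage sizes the host kernel exposes by listing the
# standard sysfs directory, whose entries look like "hugepages-2048kB"
# and "hugepages-1048576kB".
import os
import re

def host_hugepage_sizes_kb(sysfs="/sys/kernel/mm/hugepages"):
    """Return the supported hugepage sizes in KiB, e.g. [2048, 1048576]."""
    sizes = []
    if os.path.isdir(sysfs):
        for entry in os.listdir(sysfs):
            m = re.match(r"hugepages-(\d+)kB$", entry)
            if m:
                sizes.append(int(m.group(1)))
    return sorted(sizes)
```

A host that only lists 2048 here cannot back a VM with 1GB pages, regardless of how much free memory it has.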
by sorting the VM/CT IDs and the VM/CT config keys before
iterating over them.
---
src/PVE/Firewall.pm | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 4c1586a..d8e820d 100644
--- a/src/PVE/Firewall.pm
+++
>>In my experience, it always works, but you might end up with an unresponsive
>>system for minutes/hours/days, because it swaps out the memory (if you have
>>enough) that is in the way. I killed at least two machines with this.
>>Therefore, I always configure hugepages and do a reboot. This
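The "configure hugepages and do a reboot" approach mentioned above usually means reserving the pages on the kernel command line at boot, before memory has a chance to fragment. A hedged example (the page count is illustrative; the parameters are the standard kernel boot options):

```
# /etc/default/grub - reserve 8 x 1GB pages at boot (example values)
GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=8"
```

followed by `update-grub` and a reboot. Pages reserved this way are taken out of the general allocator early, which is why boot-time reservation of 1GB pages succeeds where runtime allocation on a fragmented host fails.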
On Fri, May 13, 2016 at 07:52:05AM +0200, Stefan Priebe - Profihost AG wrote:
> Does upstream know about this?
Just added: https://bugzilla.kernel.org/show_bug.cgi?id=118191
(I meant to do that yesterday already but got distracted...)
> > Am 12.05.2016 um 12:51 schrieb Wolfgang Bumiller
On Fri, May 13, 2016 at 8:32 AM, Alexandre DERUMIER
wrote:
> >>Also, allocating 1GB pages may fail due to fragmentation, while
> >>allocating 2MB pages still works?
>
> yes.
> But this is a little bit more complex to manage.
>
In my experience, it always works, but
>>Does upstream know about this?
I haven't seen any report.
Maybe we could try to contact Paolo Bonzini from Red Hat.
----- Original Mail -----
From: "Stefan Priebe"
To: "pve-devel"
Sent: Friday, 13 May 2016 07:52:05
Subject: Re: [pve-devel]
>>Yes, that sounds reasonable to me. It would be great if we can get that
>>working
>>without the need to configure something (grub.conf, fstab, ...)
Ok, I'll try to improve my patch in this direction.
----- Original Mail -----
From: "dietmar"
To: "aderumier"
>>Also, allocating 1GB pages may fail due to fragmentation, while
>>allocating 2MB pages still works?
Yes.
But this is a little bit more complex to manage.
In this case we need to try to allocate the pages before generating the
qemu command line, since we generate the command line differently for
1GB vs 2MB pages.
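Allocating first and only then generating the command line could look roughly like this (a sketch under assumptions: the sysfs paths are the standard kernel interface, but `pages_needed` and `reserve_hugepages` are illustrative names, not the actual patch code; writing `nr_hugepages` requires root):

```python
# Sketch: request hugepages via sysfs and verify the kernel actually
# granted them before committing to a hugepage-backed qemu command line.

def pages_needed(memory_bytes, page_size_kb):
    """How many pages of the given size back the VM memory."""
    page_bytes = page_size_kb * 1024
    if memory_bytes % page_bytes != 0:
        raise ValueError("memory is not a multiple of the hugepage size")
    return memory_bytes // page_bytes

def reserve_hugepages(pages, page_size_kb):
    """Ask the kernel for 'pages' hugepages; raise if it granted fewer.

    Allocation of 1GB pages can fail on a fragmented host even when
    plenty of memory is free, so re-reading the file is essential.
    """
    path = f"/sys/kernel/mm/hugepages/hugepages-{page_size_kb}kB/nr_hugepages"
    with open(path, "w") as f:   # requires root
        f.write(str(pages))
    with open(path) as f:        # the kernel may silently grant fewer
        if int(f.read()) < pages:
            raise RuntimeError("could not allocate enough hugepages")
```

Only if `reserve_hugepages` succeeds would the caller emit a command line with hugepage-backed memory; on failure it can fall back to 2MB pages or plain memory.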