Hi,
Today I had a strange crash of ALL nodes in parallel while running the
4.1 kernel.
I was rebooting one node and all other nodes crashed with the following
output:
dlm: closing connection to node 5
BUG: unable to handle kernel NULL pointer dereference at 08e
As there were no more
What about setting CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS = N? As I understand
it, "not set" means "use the default setting the kernel devs think is the
most sane for normal use", which in this case would be Y...
On Wed, Sep 9, 2015, 06:58 Alexandre DERUMIER wrote:
> >>I could
On 09.09.2015 at 14:50, Martin Maurer wrote:
>> Stefan Priebe - Profihost AG wrote on 9 September 2015
>> at 14:12:
>>
>>
>> Hi,
>>
>> Today I had a strange crash of ALL nodes in parallel while running the
>> 4.1 kernel.
>
> Not an answer to your question but
s/PAST/PATCH/
Sorry, but it shouldn't have an effect.
On 09/09/2015 03:21 PM, Thomas Lamprecht wrote:
Signed-off-by: Thomas Lamprecht
---
www/manager/ha/ResourceEdit.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager/ha/ResourceEdit.js b/www/manager/ha/ResourceEdit.js
index 962cdda..bc48c70 100644
--- a/www/manager/ha/ResourceEdit.js
+++
Hi,
In my benchmarks, THP increases the memory transaction speed by 5-10% in the
KVM guest if THP is activated on the host side, not on the VM side. I know
that THP is bad for databases, but we are not using THP in the guest where
the database lives. Can someone provide some benchmarks for that
>>I could set it in kernel config, for example:
>>
>>CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
>>
>>would that help?
Well, this is like "echo madvise > /sys/kernel/mm/transparent_hugepage/enabled"
But I don't see how to set 'never'.
Maybe:
# CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS is not set
#
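For reference, the runtime knob discussed above can also be read back from
sysfs; here is a minimal sketch. The sysfs path is the standard one, but the
`thp_active_mode` helper name is made up for illustration:

```shell
# Sketch: extract the active THP mode from the bracketed entry in
# /sys/kernel/mm/transparent_hugepage/enabled, whose contents look like:
#   always [madvise] never
thp_active_mode() {
    printf '%s\n' "$1" | sed -n 's/.*\[\(.*\)\].*/\1/p'
}

# On a real host you would pass the sysfs file's contents:
#   thp_active_mode "$(cat /sys/kernel/mm/transparent_hugepage/enabled)"
# and switch modes at runtime with e.g.:
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
thp_active_mode "always [madvise] never"   # prints: madvise
```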
>>Thanks - but any reason? I mean 4.1 is a long-term kernel that Greg
>>Kroah-Hartman will support until 2017.
As Proxmox now uses the Ubuntu kernel (for LXC features), and the current
Ubuntu is not LTS,
Ubuntu doesn't care about running an LTS kernel.
I think we need to wait for the next LTS Ubuntu to get an LTS kernel.
This fixes bugzilla entry: https://bugzilla.proxmox.com/show_bug.cgi?id=707
---
src/PVE/API2/LXC/Status.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC/Status.pm b/src/PVE/API2/LXC/Status.pm
index 761a3c8..5e5e116 100644
--- a/src/PVE/API2/LXC/Status.pm
+++
Hi...
I am testing PVE 4 with DRBD, and I removed a volume, but it is not deleted
in any way from the DRBD layer. Instead I get this message:
drbdmanage list-volumes
+------+--------+
| Name | Vol ID |
>>Do you really want 'never'? I thought 'madvise' would be better, because
>>we can enable it for some applications (KVM)?
AFAIK, even jemalloc, for example, is not perfect with madvise; there is still
a latency impact.
There are a lot of bug reports about transparent hugepages with software
> I am testing PVE 4 with drbd and I remove a volume but is not delete any
AFAIK this is already fixed with newest drbdmanage packages.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Do you mean from the Proxmox repos? I tried, but no new version is available...
2015-09-09 11:59 GMT-03:00 Dietmar Maurer :
>
> > I am testing PVE 4 with drbd and I remove a volume but is not delete any
>
> AFAIK this is already fixed with newest drbdmanage packages.
>
>
--
>>In my benchmarks, THP increases the memory transaction speed by 5-10% in the
>>KVM guest if THP is activated on the host side, not on the VM side. I know
>>that THP is bad for databases, but we are not using THP in the guest where
>>the database lives. Can someone provide some benchmarks for
that looks strange to me - is that really required?
The current GUI works for me, so how can I reproduce the bug?
> On September 9, 2015 at 3:21 PM Thomas Lamprecht
> wrote:
>
>
> Signed-off-by: Thomas Lamprecht
> ---
>
>
> This fixes bugzilla entry: https://bugzilla.proxmox.com/show_bug.cgi?id=707
>
applied, thanks.
> I was rebooting one node and all other nodes crashed with the following
> output:
> dlm: closing connection to node 5
May I ask why you use dlm at all?
Didn't reply to the list, sorry.
Forwarded message
Betreff: Re: [pve-devel] [PAST pve-manager] use correct checked value
for ha resource edit dialog
Date: Wed, 9 Sep 2015 18:10:50 +0200
From: Thomas Lamprecht
To: Dietmar Maurer
Here is an interesting recent LKML discussion about THP:
http://www.spinics.net/lists/linux-mm/msg84776.html
- Original Message -
From: "aderumier"
To: "Andreas Steinel"
Cc: "pve-devel"
Sent: Wednesday, 9 September 2015
> >>Thanks - but any reason? I mean 4.1 is a long-term kernel that Greg
> >>Kroah-Hartman will support until 2017.
>
> As Proxmox now uses the Ubuntu kernel (for LXC features), and the current
> Ubuntu is not LTS,
>
> Ubuntu doesn't care about running an LTS kernel.
>
> I think we need to wait for the next LTS Ubuntu to have a
Changelog: rename the graphiteserver option to a generic server option.
I'll add documentation in the wiki for both the graphite and influxdb stats
plugins once this commit is applied.
/etc/pve/status.cfg
---
influxdb:
server influxdb3.odiso.net
port 8089
This requires influxdb >= 0.9 with UDP enabled:
influxdb.conf
-
[[udp]]
enabled = true
bind-address = "0.0.0.0:8089"
database = "proxmox"
batch-size = 1000
batch-timeout = "1s"
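To sanity-check a listener like the one above, one can hand-build a
line-protocol point and push it over UDP. A hedged sketch: the measurement
and tag names below are invented for illustration, and the sending side
relies on bash's /dev/udp pseudo-device:

```shell
# Sketch: build one InfluxDB (>= 0.9) line-protocol point.
# Measurement "pvetest" and tag "node" are made up for this example.
influx_line() {
    # $1 = node name (tag), $2 = numeric field value
    printf 'pvetest,node=%s value=%s' "$1" "$2"
}

# Send it to the UDP listener configured above (bash only):
#   influx_line pve1 42 > /dev/udp/localhost/8089
influx_line pve1 42   # prints: pvetest,node=pve1 value=42
```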
Nice... I'll wait for it...
Thanks
2015-09-09 14:46 GMT-03:00 Dietmar Maurer :
>
> > Do you mean, from Proxmox repos?? I tried but no new version available...
>
> we will upload new packages tomorrow ...
>
> >
> > 2015-09-09 11:59 GMT-03:00 Dietmar Maurer
> Reproduction:
> Disable a service and then try to edit it. You cannot enable it
> directly, as the checkbox is already checked.
> So you need to save it as disabled (although it is already disabled),
> reopen the edit box, and only then can you enable it.
I cannot reproduce that.
>
> Am
> Do you mean, from Proxmox repos?? I tried but no new version available...
we will upload new packages tomorrow ...
>
> 2015-09-09 11:59 GMT-03:00 Dietmar Maurer :
>
> >
> > > I am testing PVE 4 with drbd and I remove a volume but is not delete any
> >
> > AFAIK this is
> this is PVE 3.4 on wheezy with the current 4.1 kernel. I haven't
> explicitly changed anything regarding dlm. Isn't it the default on 3.4?
Well, it is confusing to talk about kernel 4.1 without mentioning
that you use Proxmox 3.4.
AFAIK nobody else uses that combination, so nobody else
I tried with a simple 128 MB Jessie box and the 'mbw' program and got these results:
mbw -n 100 50 | grep AVG > hthp_1_gthp_1
HostTransparentHugePages_(1|0)_GuestTransparentHugePages_(1|0)
Test hthp_0_gthp_0
AVG Method: MEMCPY Elapsed: 0.01728 MiB: 50.0 Copy: 2893.123 MiB/s
AVG
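The A/B runs above can be scripted. A rough sketch: the `avg_copy` awk helper
is my own invention, the mbw invocation is the one from the post, and the
sysfs writes only make sense on a real host:

```shell
# Helper (invented for this sketch): average the "Copy: X MiB/s" figures
# from mbw's AVG lines read on stdin.
avg_copy() {
    awk '{ for (i = 1; i <= NF; i++) if ($i == "Copy:") { s += $(i+1); n++ } }
         END { if (n) printf "%.3f\n", s / n }'
}

# On a real host, toggle host-side THP between runs (as discussed above):
#   echo never  > /sys/kernel/mm/transparent_hugepage/enabled
#   mbw -n 100 50 | grep AVG | avg_copy   # hthp_0 result
#   echo always > /sys/kernel/mm/transparent_hugepage/enabled
#   mbw -n 100 50 | grep AVG | avg_copy   # hthp_1 result
```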
Have you tested it with a real workload and big memory (100-200 GB, for
example)?
I'm curious to see what happens with a lot of alloc/dealloc and fragmentation.
- Original Message -
From: "Andreas Steinel"
To: "aderumier"
Cc: "Daniel Hunsaker"
On 09.09.2015 at 17:32, Dietmar Maurer wrote:
I was rebooting one node and all other nodes crashed with the following
output:
dlm: closing connection to node 5
May I ask why do you use dlm at all?
Hi,
this is PVE 3.4 on wheezy with the current 4.1 kernel. I've not changed
explicitly