Hi,
On 15.01.19 at 08:19, Stefan Priebe - Profihost AG wrote:
>
> On 15.01.19 at 08:15, Fabian Grünbichler wrote:
>> On Mon, Jan 14, 2019 at 11:04:29AM +0100, Stefan Priebe - Profihost AG wrote:
>>> Hello,
>>>
>>> today I was wondering about so
On 15.01.19 at 08:15, Fabian Grünbichler wrote:
> On Mon, Jan 14, 2019 at 11:04:29AM +0100, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> today I was wondering about some disk images while the VM was deleted.
>>
>> Inspecting the task history i found this
Hello,
today I was wondering about some disk images that were still around
although the VM had been deleted.
Inspecting the task history, I found this log:
trying to acquire cfs lock 'storage-cephstoroffice' ...
trying to acquire cfs lock 'storage-cephstoroffice' ...
trying to acquire cfs lock 'storage-cephstoroffice' ...
Hi Alexandre,
On 08.01.19 at 21:55, Alexandre DERUMIER wrote:
>>> But, file_set_contents - which save_clusterfw_conf uses - does this
>>> already[0],
>>> so maybe this is the "high-level fuse rename isn't atomic" bug again...
>>> May need to take a closer look tomorrow.
>
> mmm, ok.
>
> In
Hello,
since upgrading to PVE 5 I'm seeing the following error on a regular
basis while stress testing migration (doing 100-200 migrations in a row):
2018-10-23 13:54:42 migration speed: 384.00 MB/s - downtime 113 ms
2018-10-23 13:54:42 migration status: completed
2018-10-23 13:54:42 ERROR:
On 12.09.2018 at 10:19, Wolfgang Bumiller wrote:
> On Wed, Sep 12, 2018 at 08:29:02AM +0200, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> I don't know whether this is a known bug, but since Proxmox V5 I have
>> seen the following error message several times wh
Hello,
I don't know whether this is a known bug, but since Proxmox V5 I have
seen the following error message several times while trying to start a
VM after shutdown:
ERROR: start failed:
org.freedesktop.systemd1.UnitExists: Unit 240.scope already exists
I had a similar problem under our Ubuntu
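In case it helps others hitting this: a possible workaround sketch. The unit name 240.scope is taken from the error message above; whether stop plus reset-failed is sufficient depends on the systemd version, so treat this as an assumption, not the official fix.

```shell
# Inspect the leftover transient scope that blocks the VM start
systemctl status 240.scope

# If only a stale/abandoned scope remains (no processes left), stopping it
# usually removes the transient unit so the VM can be started again
systemctl stop 240.scope
systemctl reset-failed 240.scope
```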
On 04.09.2018 at 10:10, Thomas Lamprecht wrote:
> On 9/4/18 10:00 AM, Dominik Csapak wrote:
>> On 09/04/2018 09:24 AM, Stefan Priebe - Profihost AG wrote:
>>> I searched the PVE docs but didn't find anything.
>> [snip]
>> also in the qm manpage under 'Me
Hello list,
can anybody enlighten me as to what the difference is between
balloon: 0 and balloon not defined at all in the config file?
I searched the PVE docs but didn't find anything. It seems balloon: 0 is
set if ballooning is disabled. And if the balloon value is identical to
the memory value
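For reference, the two states in question can be produced like this (VMID 100 is a placeholder; a sketch assuming current qm behaviour):

```shell
# Explicitly disable ballooning: writes "balloon: 0" into the VM config
qm set 100 --balloon 0

# Remove the key again, i.e. the "not defined at all" case
qm set 100 --delete balloon

# Inspect the result
qm config 100 | grep -i balloon
```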
On 28.08.2018 at 10:47, Thomas Lamprecht wrote:
> On 8/27/18 7:50 PM, Stefan Priebe - Profihost AG wrote:
>> I've been using them as a default for 2 weeks. No problems so far.
>>
>
> for the backend this is probably OK.
>
> The GUI part isn't as easy to make sane.
>
I've been using them as a default for 2 weeks. No problems so far.
Greets,
Stefan
On 27.08.2018 at 18:01, Alexandre DERUMIER wrote:
> any comments on adding these cpu flags?
>
>
> - Original Message -
> From: "aderumier"
> To: "pve-devel"
> Sent: Monday, 20 August 2018 18:26:50
> Subject: Re:
>>> It also seems to make sense to enable pdpe1gb
>
> is it related to a vulnerability ?
No.
> it's already possible to use hugepage currently with "hugepages: <1024 | 2 |
> any>". But it's only on the qemu/host side.
> I think pdpe1gb exposes hugepages inside th
Hello,
after researching L1TF mitigation for QEMU and reading
https://www.berrange.com/posts/2018/06/29/cpu-model-configuration-for-qemu-kvm-on-x86-hosts/
it seems PVE is missing at least the following CPU flag:
ssbd
It also seems to make sense to enable pdpe1gb.
At least ssbd is important for
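A minimal sketch of how these flags could be exposed to a guest. The raw -cpu line is plain QEMU; the qm 'flags' syntax is an assumption based on newer PVE versions and may not exist on the release discussed here (VMID 100 is hypothetical):

```shell
# Raw QEMU command line: add ssbd and 1 GB page support to the guest CPU
#   -cpu Skylake-Server,+ssbd,+pdpe1gb

# With qm on newer PVE versions (flags syntax assumed):
qm set 100 --cpu 'Skylake-Server,flags=+ssbd;+pdpe1gb'
```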
On 24.07.2018 at 09:25, Dietmar Maurer wrote:
>> On 23.07.2018 at 21:04, Alexandre DERUMIER wrote:
>>> Personally, I think that a vm could take all cpu usage, or memory, and
>>> impact ceph cluster for other vms.
>>>
>>> we should give ceph some kind of (configurable) priority.
>>
>> Yes the
ve-devel] Hyperconverged Cloud / Qemu + Ceph on same node
>
> I am not sure CPU pinning helps. What problem do you want to solve
> exactly?
>
>> maybe we could use cgroups? (in ceph systemd units)
>>
>> we already use them for vm && ct (shares cpu option
Hello,
after listening / reading:
https://www.openstack.org/videos/vancouver-2018/high-performance-ceph-for-hyper-converged-telco-nfv-infrastructure
and
https://www.youtube.com/watch?v=0_V-L7_CDTs=youtu.be
and
https://arxiv.org/pdf/1802.08102.pdf
I was thinking about creating a Proxmox based
On 10.01.2018 at 08:08, Fabian Grünbichler wrote:
> On Tue, Jan 09, 2018 at 08:47:10PM +0100, Stefan Priebe - Profihost AG wrote:
>> On 09.01.2018 at 19:25, Fabian Grünbichler wrote:
>>> On Tue, Jan 09, 2018 at 04:31:40PM +0100, Stefan Priebe - Profihost AG
>>> wro
On 09.01.2018 at 19:25, Fabian Grünbichler wrote:
> On Tue, Jan 09, 2018 at 04:31:40PM +0100, Stefan Priebe - Profihost AG wrote:
>>
>> On 09.01.2018 at 16:18, Fabian Grünbichler wrote:
>>> On Tue, Jan 09, 2018 at 02:58:24PM +0100, Fabian Grünbichler wrote:
>>>
On 09.01.2018 at 16:18, Fabian Grünbichler wrote:
> On Tue, Jan 09, 2018 at 02:58:24PM +0100, Fabian Grünbichler wrote:
>> On Mon, Jan 08, 2018 at 09:34:57PM +0100, Stefan Priebe - Profihost AG wrote:
>>> Hello,
>>>
>>> for meltdown mitigation and perform
Greets,
Stefan
>
> - Original Message -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "aderumier" <aderum...@odiso.com>
> Cc: "pve-devel" <pve-devel@pve.proxmox.com>
> Sent: Tuesday, 9 January 2018 14:32:27
On 09.01.2018 at 14:24, Alexandre DERUMIER wrote:
> ok thanks !
I've got the first customers seeing an impact. Worst example so far:
75% CPU load in the guest instead of 12%.
> - Original Message -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
fore now model qemu64,+pcid:
real 0m13.870s
user 0m7.128s
sys 0m6.697s
qemu64:
real 0m25.214s
user 0m16.923s
sys 0m8.956s
Stefan
>
> - Original Message -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "aderumier" <aderum...@
On 09.01.2018 at 12:55, Alexandre DERUMIER wrote:
>>> Yes - see an example which does a lot of syscalls:
>
> and for qemu64? (is it possible to send +pcid too?)
You mean:
-cpu qemu64,+pcid
?
Stefan
> - Original Message -----
> From: "Stefan Priebe, Profihost AG"
done
...
real 0m26.614s
user 0m17.548s
sys 0m9.056s
kvm started with +pcid:
# time for i in $(seq 1 1 50); do du -sx /; done
...
real 0m14.734s
user 0m7.755s
sys 0m6.973s
Greets,
Stefan
>
> - Original Message -
> From: "Stefan Priebe, Profihost AG"
" models. I'm not sure of
> a way to fix it; probably it just has to be documented."
That's bad, as pcid is very important for performance with the Meltdown
fixes in the Linux kernel.
Stefan
>
> - Original Message -
> From: "Stefan Priebe, Profihost AG" <s.pri...@p
omething like that.
> But I'm not sure it's easy to maintain. (maybe some user want to keep an old
> qemu version for example, with last qemu-server code)
>
> As alternative, we could patch qemu to have something like version : 2.11.x.y
> (where x is the minor version from qemu, and
> or add code to manage custom cpuflags and add a checkbox in cpu options ?
>
>
>
> - Original Message -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "pve-devel" <pve-devel@pve.proxmox.com>
> Sent: Monday, 8 January
On 08.01.2018 at 22:11, Michael Rasmussen wrote:
> On Mon, 8 Jan 2018 21:34:57 +0100
> Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote:
>
>> Hello,
>>
>> for meltdown mitigation and performance it's important to have the pcid
>> flag
Hello,
for Meltdown mitigation and performance it's important to have the pcid
flag passed down to the guest (e.g.
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU).
My host shows the flag:
# grep ' pcid ' /proc/cpuinfo | wc -l
56
But the guest does not:
# grep pcid
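The host/guest check and one way to forward the flag could look like the following sketch. VMID 100 and the args workaround are assumptions; args is a low-level escape hatch that may conflict with the cpu type qemu-server sets itself:

```shell
# Host: count logical CPUs advertising pcid
grep -c ' pcid ' /proc/cpuinfo

# Guest with plain kvm64/qemu64: the same check returns 0.
# Low-level workaround via the args escape hatch (sketch only):
qm set 100 --args '-cpu kvm64,+pcid'

# Alternatively pass the host CPU model through, which forwards pcid too:
qm set 100 --cpu host
```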
two CVEs (one of which is
> "Meltdown"), yes.
>
> Paolo
>
>
> - Original Message -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "pve-devel" <pve-devel@pve.proxmox.com>
> Sent: Thursday, 4 January 2018 19:25:44
] [
> https://www.facebook.com/monsiteestlent ]
>
> [ https://www.monsiteestlent.com/ | MonSiteEstLent.com ] - Blog dedicated to
> web performance and handling traffic spikes
>
> - Original Message -
> From: "Fabian Grünbichler" <f.gruenbich...@proxmox.com
Hello,
as far as I can see, at least SuSE updated QEMU for Meltdown and Spectre
to provide the CPUID information to the guest.
I think we need to patch QEMU as well ASAP. Has anybody found the
relevant patches?
https://www.pro-linux.de/sicherheit/2/41859/preisgabe-von-informationen-in-qemu.html
Hello,
happy new year to all!
Currently I'm trying to fix a problem where we have "random" missing
packets.
We're doing an SSH connect from machine A to machine B every 5 minutes
via rsync and ssh.
Sometimes it happens that we get this cron message:
"Connection to 192.168.0.2 closed by remote
>> [ ok ] Restarting pveproxy (via systemctl): pveproxy.service.
>>
>> Kind regards,
>> Caspar
>>
>> 2017-11-09 13:47 GMT+01:00 Thomas Lamprecht <t.lampre...@proxmox.com>:
>>
>>>> On 11/09/2017 01:40 PM, Stefan Priebe - Profihost AG
Sorry for the late reply. Is there a chance to get the fix backported to 4.4?
Greets,
Stefan
Excuse my typo sent from my mobile phone.
> On 09.11.2017 at 13:40, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
> *arg* sorry about that and thanks for r
*arg* sorry about that and thanks for resending your last paragraph. Yes
that's exactly the point.
Also thanks for the restart and systemctl explanation.
Greets,
Stefan
On 09.11.2017 at 13:35, Thomas Lamprecht wrote:
> Hi,
>
> On 11/09/2017 01:08 PM, Stefan Priebe - Profihost AG wrot
08:21 AM, Stefan Priebe - Profihost AG wrote:
>> On 21.09.2017 at 15:30, Thomas Lamprecht wrote:
>>> On 09/20/2017 01:26 PM, Stefan Priebe - Profihost AG wrote:
>>>>> [snip]
>>>> thanks for the reply. Does this also apply to PVE 4? Sorry I missed that
>
Hello,
any news on this? Is this expected?
Thanks,
Stefan
On 27.09.2017 at 08:21, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> On 21.09.2017 at 15:30, Thomas Lamprecht wrote:
>> On 09/20/2017 01:26 PM, Stefan Priebe - Profihost AG wrote:
>>> Hi,
>>>
>&
Hi,
On 21.09.2017 at 15:30, Thomas Lamprecht wrote:
> On 09/20/2017 01:26 PM, Stefan Priebe - Profihost AG wrote:
>> Hi,
>>
>>
>> On 20.09.2017 at 10:36, Thomas Lamprecht wrote:
>>> On 09/20/2017 06:40 AM, Stefan Priebe - Profihost AG wrote:
Hi,
On 20.09.2017 at 10:36, Thomas Lamprecht wrote:
> On 09/20/2017 06:40 AM, Stefan Priebe - Profihost AG wrote:
>> Nobody?
>>
>
> We register the restart command from pveproxy with the $use_hup parameter,
> this then sends a SIGHUP when calling pveproxy resta
Nobody?
Stefan
Excuse my typo sent from my mobile phone.
> On 12.09.2017 at 09:27, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
> Hello,
>
> pveproxy already has a reload command - which seems to reopen the logs
> correctly. Is there any
Hello,
pveproxy already has a reload command, which seems to reopen the logs
correctly. Is there any reason why restart is used in the postrotate
part of logrotate?
Greets,
Stefan
On 12.09.2017 at 09:13, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> we're heavily using the pve ap
Hello,
we're heavily using the PVE API, doing a lot of calls every few minutes.
Currently log rotation does a pveproxy restart, which makes the API
unavailable for a few seconds. This is pretty bad.
I see the following solution:
- add a special signal like SIGUSR1 to pveproxy and spiceproxy
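Another option that would avoid a new signal entirely: switch the logrotate postrotate action from restart to the existing reload command. A config-fragment sketch with assumed paths (check the shipped /etc/logrotate.d file on your system):

```shell
# Sketch of an /etc/logrotate.d entry using reload instead of restart,
# so the listening socket stays open during rotation (paths assumed)
/var/log/pveproxy/access.log {
    weekly
    rotate 7
    compress
    missingok
    postrotate
        systemctl reload pveproxy spiceproxy >/dev/null 2>&1 || true
    endscript
}
```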
I'll report back if it
happens again.
Thanks!
Greets,
Stefan
>
>
> - Original Message -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "pve-devel" <pve-devel@pve.proxmox.com>
> Sent: Tuesday, 22 August 2017 08:04:18
> Subject: [pve
Hello,
while running PVE 4.4 I'm observing lost TCP packets on a new cluster.
Two VMs are connected to the same bridge on the same host, using virtio
and Linux as guests.
But the target guest does not receive all TCP packets; some get lost. I
see them leaving the source using tcpdump, but they
Hello,
On 11.07.2017 at 16:40, Thomas Lamprecht wrote:
> commit 85909c04c49879f5fffa366fc3233eee2b157e97 switched from cirrus
> to vga for non windows OSs.
>
> This adds an artificial blocker on live migrations from PVE 4.X to
> PVE 5.0.
> Address it in PVE 4.4 by explicitly setting cirrus in
-Viewer Version 1.00
I haven't checked spice-client-gtk at all. I always used remote-viewer
from virt-viewer.
Greets,
Stefan
On 05.07.2017 at 11:36, Thomas Lamprecht wrote:
> Hi,
>
> On 07/04/2017 08:40 AM, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>>
Hello,
the following commit fixed it for me.
commit 6ddcde393f8bdbb5aaf3d347213bf819c788478b
Author: Stefan Priebe <s.pri...@profihost.ag>
Date: Tue Jul 4 11:14:19 2017 +0200
PVE/HTTPServer: add CONNECT method for spice
diff --git a/PVE/HTTPServer.pm b/PVE/HTTPServer.pm
index a
Hello,
I'm doing some experiments with the spiceproxy; currently just trying to
get it working.
After downloading a SPICE connection file from the web GUI, remote-viewer
stops with:
"failed to connect HTTP proxy connection failed: 501 method 'CONNECT'
not available"
The access.log of the spice
On 27.04.2017 at 19:12, Dietmar Maurer wrote:
> Hi Stefan,
>
> is this already solved? Or do you still observe that problem?
Thanks for asking. It seems to be a client-related timeout, not on the
PVE side. But what I did not understand is that I don't see it if I raise
$ua->timeout( $secs );
Hello,
while stress testing the PVE API I'm quite often seeing a "500 read
timeout" response to my simple GET requests against the API -
around 1 out of 50 requests when firing one request every 200 ms
(wait for the answer, fire the next one).
That one is coming from $response->status_line of a
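A reproduction sketch of that request pattern; endpoint, node name and the API-token header are placeholders (token auth only exists on newer PVE versions), so adapt to whatever authentication the test client used:

```shell
# Fire one GET every 200 ms and print only the HTTP status codes;
# a failing request shows up as a non-200 line.
for i in $(seq 1 50); do
    curl -ks -o /dev/null -w '%{http_code}\n' \
        -H 'Authorization: PVEAPIToken=root@pam!monitor=SECRET' \
        'https://localhost:8006/api2/json/nodes/node1/qemu/100/status/current'
    sleep 0.2
done
```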
THX
Stefan
On 05.04.2017 at 12:11, Wolfgang Bumiller wrote:
> applied to both master and stable-4
>
> On Tue, Apr 04, 2017 at 04:43:31PM +0200, Thomas Lamprecht wrote:
>> From: Stefan Priebe <s.pri...@profihost.ag>
>>
>> This allows us to use management soft
Hi,
I sometimes have problems with the PVE API - it just closes the connection
or gives me a timeout - and I wanted to debug this.
I wanted to raise the number of workers, but wasn't able to find an option
to change the workers of pveproxy and/or pvedaemon.
Greets,
Stefan
Never mind. I found the culprit: the file is just read too early by other nodes.
Stefan
Excuse my typo sent from my mobile phone.
> On 22.03.2017 at 15:19, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
> Hi,
>
> this works fine with /var/log/pve/tas
e anything special
with the migration log? All others get the correct state of OK.
Greets
Stefan
On 11.03.2017 at 08:28, Dietmar Maurer wrote:
>
>
>> On March 10, 2017 at 9:24 PM Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>
>>
>&
Hi Thomas,
thanks, and yes, if you do a V5 that would be great!
Stefan
On 21.03.2017 at 10:46, Thomas Lamprecht wrote:
> Hi,
>
> On 03/20/2017 03:11 PM, Stefan Priebe wrote:
>> This allows us to use management software for files inside of /etc/pve.
>> f.e. saltstack whi
This allows us to use management software for files inside of /etc/pve,
e.g. saltstack, which relies on being able to set uid, gid and chmod.
Reviewed-by: Thomas Lamprecht <t.lampre...@proxmox.com>
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
data/src/p
V4:
- allow chmod for priv path as well
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
On 19.03.2017 at 21:42, Dietmar Maurer wrote:
>> To me the main question is why does pve-cluster provide a default of 0
>> which disables iptables for bridges and makes pve-firewall useless for
>> linux bridges.
>
> AFAIR this is for performance reasons ...
Sure, but pve-firewall isn't working
Hi,
On 19.03.2017 at 14:44, Dietmar Maurer wrote:
>> After digging around for some weeks i found out that the chain FORWARD
>> does not receive packets anymore?
>
> Any hints in syslog?
No, the reason is simply that
net.bridge.bridge-nf-call-iptables
is 0 again. Most probably because
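For anyone debugging the same symptom, checking and re-enabling the sysctl looks like this (the sysctl.d file name is an assumption; something on the system resetting it to 0 would undo the runtime change again):

```shell
# Check whether bridged traffic is passed to iptables at all;
# 0 means the FORWARD chain never sees bridged packets
sysctl net.bridge.bridge-nf-call-iptables

# Re-enable it at runtime
sysctl -w net.bridge.bridge-nf-call-iptables=1

# Persist across reboots / module reloads
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-bridge-nf.conf
```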
Hello list,
I'm going crazy with a problem I don't understand.
After some time the pve-firewall stops working for me: it doesn't filter
any packets anymore. If I restart pve-firewall, everything is fine again.
After digging around for some weeks I found out that the FORWARD chain
does not receive
Great, thanks. Is there any reason why we don't use the existing pmxcfs for that
path as well?
Stefan
Excuse my typo sent from my mobile phone.
> On 11.03.2017 at 08:28, Dietmar Maurer <diet...@proxmox.com> wrote:
>
>
>
>> On March 10, 2017 at 9:24 PM Stefan Priebe
On 10.03.2017 at 21:20, Dietmar Maurer wrote:
>> Sure. Great. So there's no problem if all files got shared between
>> nodes?
>
> Sorry, but I never tested that.
>
>> I've never looked at the code for the active and index files...
>
> I guess you would need some cluster wide locking, or use
On 10.03.2017 at 20:54, Dietmar Maurer wrote:
>> Is there any reason not to run /var/log/pve on ocfs2? So that it is
>> shared over all nodes?
>
> never tried. But I guess it is no problem as long as ocfs2 works (cluster
> quorate).
Sure. Great. So there's no problem if all files got shared
Hello,
I don't like that I don't have a complete task history of a VM after
migration.
Is there any reason not to run /var/log/pve on ocfs2, so that it is
shared across all nodes?
Greets,
Stefan
Thanks for the review. V4 sent.
Stefan
On 10.03.2017 at 10:20, Thomas Lamprecht wrote:
> small comment inline,
>
> On 03/09/2017 08:17 PM, Stefan Priebe wrote:
>> This allows us to use management software for files inside of /etc/pve.
>> f.e. saltstack which rely on bein
This allows us to use management software for files inside of /etc/pve,
e.g. saltstack, which relies on being able to set uid, gid and chmod.
Reviewed-by: Thomas Lamprecht <t.lampre...@proxmox.com>
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
data/src/p
fixed the indent in the fuse_operations as well
This allows us to use management software for files inside of /etc/pve,
e.g. saltstack, which relies on being able to set uid, gid and chmod.
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
data/src/pmxcfs.c | 33 -
1 file changed, 32 insertions(+), 1 de
Hello Thomas,
On 09.03.2017 at 18:09, Thomas Lamprecht wrote:
> On 03/09/2017 05:26 PM, Stefan Priebe wrote:
>> This allows us to use management software for files inside of /etc/pve,
>> e.g. saltstack, which relies on being able to set uid, gid and chmod.
>>
>> Signed-o
This allows us to use management software for files inside of /etc/pve,
e.g. saltstack, which relies on being able to set uid, gid and chmod.
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
data/src/pmxcfs.c | 33 -
1 file changed, 32 insertions(+), 1 de
dy has / supports.
At least saltstack always sets chmod and chown values and fails if it
can't. Now it believes that it was successful while providing salt with
the correct values:
user: root
group: www-data
chmod: 0640
Greets,
Stefan
>
>> On March 9, 2017 at 5:26 PM Stefan Priebe <s.pri...@p
This allows us to use management software for files inside of /etc/pve,
e.g. saltstack, which relies on being able to set uid, gid and chmod.
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
data/src/pmxcfs.c | 41 -
1 file changed, 40 insertions
To avoid cyclic deps an alternative might be forward declaration.
http://www.perlmonks.org/?node_id=1057957
Stefan
Excuse my typo sent from my mobile phone.
On 06.02.2017 at 18:38, Dietmar Maurer wrote:
>> An alternative might be
>> http://perldoc.perl.org/autouse.html
An alternative might be
http://perldoc.perl.org/autouse.html
Stefan
Excuse my typo sent from my mobile phone.
> On 06.02.2017 at 18:20, Dietmar Maurer wrote:
>
> is there really no other way to solve this issue?
>
> ___
>
ine:
>
> On Mon, Feb 06, 2017 at 11:25:44AM +0100, Stefan Priebe - Profihost AG wrote:
>> Hi,
>>
>> after upgrading my test cluster to latest git versions from 4.3. I've no
>> working firewall rules anymore. All chains contain an ACCEPT rule. But
>> i'm not
Hi,
after upgrading my test cluster to the latest git versions from 4.3, I've
no working firewall rules anymore. All chains contain an ACCEPT rule. I'm
not sure whether this was also the case with 4.3, but it breaks the rules.
The chain is this one:
# iptables -L tap137i0-IN -vnx
Chain
Hi,
W: Failed to fetch
http://download.proxmox.com/debian/dists/jessie/pve-no-subscription/binary-amd64/Packages
Hash Sum mismatch
Stefan
On 19.12.2016 at 08:40, Fabian Grünbichler wrote:
> On Mon, Dec 19, 2016 at 07:23:35AM +0100, Stefan Priebe - Profihost AG wrote:
>> Anything wrong or a bug?
>>
>> Greets,
>> Stefan
>
> nothing wrong. unlocking a VM is possible with the special "qm unloc
I think you can simply remove it. It's already upstream and I'm not sure if
there are users in 3.10.
Thanks.
Stefan
Excuse my typo sent from my mobile phone.
> On 19.12.2016 at 07:08, Dietmar Maurer wrote:
>
> Hi Stefan,
>
> I just updated the 3.10.0 kernel:
>
>
Anything wrong or a bug?
Greets,
Stefan
Excuse my typo sent from my mobile phone.
> On 16.12.2016 at 22:19, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
> Hello,
>
> is there a way to unlock a VM through the API?
>
> I tried it this way bu
Hello,
is there a way to unlock a VM through the API?
I tried it this way but this does not work:
pve:/nodes/testnode1/qemu/100> set config -delete lock
VM is locked (migrate)
Greets,
Stefan
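As the reply in this thread points out, the CLI has a dedicated command for this; a sketch with VMID 100 as a placeholder:

```shell
# On the node that owns the VM: clear the lock (e.g. a stale "migrate" lock)
qm unlock 100

# A plain "set config -delete lock" via the API fails because the config
# update itself is refused while the lock is set; qm unlock skips that check.
```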
Hello,
after upgrading a PVE cluster from 3.4 to 4.4, I have some higher-volume
VMs which show high ping times, even when they're on the same node, and
slow network speed as tested with iperf.
Has anybody seen something like this before?
--- 192.168.0.11 ping statistics ---
20 packets transmitted, 20
Hi,
since starting to upgrade some nodes to PVE 4.x, I've seen that a lot of
them have a failed watchdog-mux service.
Is there any reason why this one is enabled by default?
# systemctl --failed
UNIT                 LOAD   ACTIVE SUB    DESCRIPTION
● watchdog-mux.service loaded failed failed
On 29.11.2016 at 10:29, Dietmar Maurer wrote:
>> So it seems that the whole firewall breaks if there is somewhere
>> something wrong.
>>
>> I think especially for the firewall it's important to just skip that
>> line but process all other values.
>
> That is how it should work. If there is a
On 29.11.2016 at 10:24, Fabian Grünbichler wrote:
> On Tue, Nov 29, 2016 at 10:10:53AM +0100, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> today I've noticed that the firewall is nearly inactive on a node.
>>
>> systemctl status says:
>> Nov 29 10
-v4_swap PVEFW-120-letsencrypt-v4
flush PVEFW-120-letsencrypt-v4_swap
destroy PVEFW-120-letsencrypt-v4_swap
which fails:
ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
CIDR parameter of the IP address is invalid
Stefan
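For context, pve-firewall updates ipsets atomically via a swap set; the failing sequence roughly corresponds to this sketch (set name taken from the log above, addresses are examples):

```shell
# Build the new content in a temporary swap set
ipset create PVEFW-120-letsencrypt-v4_swap hash:net family inet
ipset add PVEFW-120-letsencrypt-v4_swap 192.0.2.0/24      # valid CIDR
# an entry with an out-of-range prefix (e.g. /33 for IPv4) aborts the
# restore with "The value of the CIDR parameter of the IP address is invalid"

# Atomically replace the live set, then clean up
ipset swap PVEFW-120-letsencrypt-v4_swap PVEFW-120-letsencrypt-v4
ipset flush PVEFW-120-letsencrypt-v4_swap
ipset destroy PVEFW-120-letsencrypt-v4_swap
```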
On 29.11.2016 at 10:10, Stefan Priebe - Profihost
Hello,
today I've noticed that the firewall is nearly inactive on a node.
systemctl status says:
Nov 29 10:07:05 node2 pve-firewall[2534]: status update error:
ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
CIDR parameter of the IP address is invalid
Nov 29 10:07:14 node2
On 22.11.2016 at 14:38, Fabian Grünbichler wrote:
> On Tue, Nov 22, 2016 at 01:57:47PM +0100, Fabian Grünbichler wrote:
>> On Tue, Nov 22, 2016 at 01:09:17PM +0100, Stefan Priebe - Profihost AG wrote:
>>> Hi,
>>>
>>> On 22.11.2016 at 12:26, Fabian Gr
Hi,
On 22.11.2016 at 12:26, Fabian Grünbichler wrote:
> On Tue, Nov 22, 2016 at 12:11:22PM +0100, Stefan Priebe - Profihost AG wrote:
>> On 22.11.2016 at 11:56, Dietmar Maurer wrote:
>>> I think this commit should solve the issue:
>>>
>>> https://git.proxmox
oot certificate, in PEM format)"
With the full chain it's not working. I then removed the whole chain and
only put my final crt into it, and now it's working fine. With the full
chain, $depth was 2 in my case.
Greets,
Stefan
>>> On November 22, 2016 at 11:49 AM Stefan Priebe - Pr
Hi,
While using a custom certificate was working fine for me with 3, I'm
getting the following error message if I'm connected to node X and want
to view the HW tab of a VM running on node Y:
596 ssl3_get_server_certificate: certificate verify failed
Request
Ignore me, my fault...
Stefan
On 22.11.2016 at 10:15, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> in the past / with V3 i was able to move qemu-server VM config files
> around simply with mv.
>
> Under v4 it seems this no longer works; the files automagically move to
Hi,
in the past, with V3, I was able to move qemu-server VM config files
around simply with mv.
Under V4 it seems this no longer works; the files automagically move back
to their old location.
Here an example:
[node4 ~]# for VM in $(ps aux|grep "kvm"|grep -- "-id"|sed -e "s/.*-id
//" -e "s/ .*//"); do
On 17.11.2016 at 08:42, Fabian Grünbichler wrote:
> On Thu, Nov 17, 2016 at 07:01:24AM +0100, Dietmar Maurer wrote:
>>> It is really hard to review patches without descriptions. Please
>>> can you add minimal information?
>>
>> Oh, just saw you sent that in a separate mail - please ignore me!
>
On 17.11.2016 at 07:33, Dietmar Maurer wrote:
> AFAIK we only protect base volumes, and we 'unprotect' that in the code.
> So what is the purpose of this patch?
>
good question ;-) I just remember that I had this situation where I did
clones from snapshots, which resulted in protected snapshots.
On 17.11.2016 at 07:20, Dietmar Maurer wrote:
>> While blessing it is good practice to provide the class. This also makes
>> it possible to use
>> QemuServer as a base / parent class.
>
> Why do you want another class (QemuServer) as base?
We have a custom class PHQemuServer which has
, ext4, btrfs, ...
So there are several cases where you want to shrink a volume, without
downtime of the server.
Greets,
Stefan
>
>> On November 16, 2016 at 8:13 PM Stefan Priebe <s.pri...@profihost.ag> wrote:
>>
>>
>> Signed-off-by: Stefan Priebe <s.p
h.com/issues/15991
> https://github.com/ceph/ceph/pull/9408
Both are closed or marked as rejected, so I don't think this will change
in the future.
Greets,
Stefan
> - Original Message -----
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "pv
It took me some time to figure out that a custom modification was causing
a whole plugin to fail loading.
The warning also hides in the systemctl status -l / journal log. I think
dying is better if a plugin contains an error.
[PATCH] VZDump: die with error if plugin loading fails