Re: [pve-devel] vm deletion succeeds even if storage deletion fails

2019-01-16 Thread Stefan Priebe - Profihost AG
Hi, On 15.01.19 at 08:19, Stefan Priebe - Profihost AG wrote: > > On 15.01.19 at 08:15, Fabian Grünbichler wrote: >> On Mon, Jan 14, 2019 at 11:04:29AM +0100, Stefan Priebe - Profihost AG wrote: >>> Hello, >>> >>> today I was wondering about so

Re: [pve-devel] vm deletion succeeds even if storage deletion fails

2019-01-14 Thread Stefan Priebe - Profihost AG
On 15.01.19 at 08:15, Fabian Grünbichler wrote: > On Mon, Jan 14, 2019 at 11:04:29AM +0100, Stefan Priebe - Profihost AG wrote: >> Hello, >> >> today I was wondering about some leftover disk images after the vm was deleted. >> >> Inspecting the task history I found this

[pve-devel] vm deletion succeeds even if storage deletion fails

2019-01-14 Thread Stefan Priebe - Profihost AG
Hello, today I was wondering about some leftover disk images after the vm was deleted. Inspecting the task history I found this log: trying to acquire cfs lock 'storage-cephstoroffice' ... trying to acquire cfs lock 'storage-cephstoroffice' ... trying to acquire cfs lock 'storage-cephstoroffice' ...

Re: [pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Stefan Priebe - Profihost AG
Hi Alexandre, On 08.01.19 at 21:55, Alexandre DERUMIER wrote: >>> But, file_set_contents - which save_clusterfw_conf uses - does this >>> already[0], >>> so maybe this is the "high-level fuse rename isn't atomic" bug again... >>> May need to take a closer look tomorrow. > > mmm, ok. > > In

[pve-devel] tunnel replied 'ERR: resume failed - unable to find configuration file for VM 214 - no such machine' to command 'resume 214'

2018-10-23 Thread Stefan Priebe - Profihost AG
Hello, since upgrading to PVE 5 I'm seeing the following error on a regular basis while stress testing migration (doing 100-200 migrations in a row): 2018-10-23 13:54:42 migration speed: 384.00 MB/s - downtime 113 ms 2018-10-23 13:54:42 migration status: completed 2018-10-23 13:54:42 ERROR:

Re: [pve-devel] Unit 240.scope already exists.

2018-09-12 Thread Stefan Priebe - Profihost AG
On 12.09.2018 at 10:19, Wolfgang Bumiller wrote: > On Wed, Sep 12, 2018 at 08:29:02AM +0200, Stefan Priebe - Profihost AG wrote: >> Hello, >> >> I don't know whether this is a known bug, but since Proxmox V5 I have >> seen the following error message several times wh

[pve-devel] Unit 240.scope already exists.

2018-09-12 Thread Stefan Priebe - Profihost AG
Hello, I don't know whether this is a known bug, but since Proxmox V5 I have seen the following error message several times while trying to start a vm after shutdown: ERROR: start failed: org.freedesktop.systemd1.UnitExists: Unit 240.scope already exists I had a similar problem under our Ubuntu

Re: [pve-devel] qemu config and balloon

2018-09-04 Thread Stefan Priebe - Profihost AG
On 04.09.2018 at 10:10, Thomas Lamprecht wrote: > On 9/4/18 10:00 AM, Dominik Csapak wrote: >> On 09/04/2018 09:24 AM, Stefan Priebe - Profihost AG wrote: >>> I searched the PVE docs but didn't find anything. >> [snip] >> also in the qm manpage under 'Me

[pve-devel] qemu config and balloon

2018-09-04 Thread Stefan Priebe - Profihost AG
Hello list, can anybody enlighten me about the difference between "balloon: 0" and balloon not being defined at all in the config file? I searched the PVE docs but didn't find anything. It seems balloon: 0 is set if ballooning is disabled. And if the balloon value is identical to the memory value

Re: [pve-devel] [PATCH pve-docs] add ibpb, ssbd, virt-ssbd, amd-ssbd, amd-no-ssb, pdpe1gb cpu flags

2018-08-28 Thread Stefan Priebe - Profihost AG
On 28.08.2018 at 10:47, Thomas Lamprecht wrote: > On 8/27/18 7:50 PM, Stefan Priebe - Profihost AG wrote: >> I've been using them as a default for 2 weeks. No problems so far. >> > > for the backend this is probably OK. > > The GUI part isn't as easy to make sane. >

Re: [pve-devel] [PATCH pve-docs] add ibpb, ssbd, virt-ssbd, amd-ssbd, amd-no-ssb, pdpe1gb cpu flags

2018-08-27 Thread Stefan Priebe - Profihost AG
I've been using them as a default for 2 weeks. No problems so far. Greets, Stefan On 27.08.2018 at 18:01, Alexandre DERUMIER wrote: > any comments on adding these cpu flags ? > > > - Original message - > From: "aderumier" > To: "pve-devel" > Sent: Monday 20 August 2018 18:26:50 > Subject: Re:

Re: [pve-devel] missing cpu flags? (CVE-2018-3639)

2018-08-20 Thread Stefan Priebe - Profihost AG
>>> It also seems to make sense to enable pdpe1gb > > is it related to a vulnerability ? No. > it's already possible to use hugepages currently with "hugepages: <1024 | 2 | > any>". But that's only on the qemu/host side. > I think pdpe1gb exposes hugepage support inside th

[pve-devel] missing cpu flags? (CVE-2018-3639)

2018-08-17 Thread Stefan Priebe - Profihost AG
Hello, after researching l1tf mitigation for qemu and reading https://www.berrange.com/posts/2018/06/29/cpu-model-configuration-for-qemu-kvm-on-x86-hosts/ it seems pve misses at least the following cpu flag: ssbd. It also seems to make sense to enable pdpe1gb. At least ssbd is important for
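A quick generic check (not from the thread) for whether a guest actually received the flag:

  # inside the guest - the mitigation only works if ssbd shows up here
  grep -qw ssbd /proc/cpuinfo && echo "ssbd present" || echo "ssbd missing"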

Re: [pve-devel] Hyperconverged Cloud / Qemu + Ceph on same node

2018-07-24 Thread Stefan Priebe - Profihost AG
On 24.07.2018 at 09:25, Dietmar Maurer wrote: >> On 23.07.2018 at 21:04, Alexandre DERUMIER wrote: >>> Personally, I think that a vm could take all cpu usage, or memory, and >>> impact the ceph cluster for other vms. >>> >>> we should give ceph some kind of (configurable) priority. >> >> Yes the

Re: [pve-devel] Hyperconverged Cloud / Qemu + Ceph on same node

2018-07-24 Thread Stefan Priebe - Profihost AG
[pve-devel] Hyperconverged Cloud / Qemu + Ceph on same node > > I am not sure CPU pinning helps. What problem do you want to solve > exactly? > >> maybe could we use cgroups ? (in ceph systemd units) >> >> we already use them for vm && ct (shares cpu option
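To illustrate the cgroup idea from the quote, a hypothetical use of systemd resource-control properties on an OSD unit (unit name and values invented, not something PVE ships):

  # lower the OSD's CPU weight relative to VM scopes
  systemctl set-property ceph-osd@0.service CPUShares=512
  # or enforce a hard ceiling of two cores' worth of CPU time
  systemctl set-property ceph-osd@0.service CPUQuota=200%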

[pve-devel] Hyperconverged Cloud / Qemu + Ceph on same node

2018-07-23 Thread Stefan Priebe - Profihost AG
Hello, after listening to / reading: https://www.openstack.org/videos/vancouver-2018/high-performance-ceph-for-hyper-converged-telco-nfv-infrastructure and https://www.youtube.com/watch?v=0_V-L7_CDTs and https://arxiv.org/pdf/1802.08102.pdf I was thinking about creating a Proxmox based

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
On 10.01.2018 at 08:08, Fabian Grünbichler wrote: > On Tue, Jan 09, 2018 at 08:47:10PM +0100, Stefan Priebe - Profihost AG wrote: >> On 09.01.2018 at 19:25, Fabian Grünbichler wrote: >>> On Tue, Jan 09, 2018 at 04:31:40PM +0100, Stefan Priebe - Profihost AG >>> wro

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
On 09.01.2018 at 19:25, Fabian Grünbichler wrote: > On Tue, Jan 09, 2018 at 04:31:40PM +0100, Stefan Priebe - Profihost AG wrote: >> >> On 09.01.2018 at 16:18, Fabian Grünbichler wrote: >>> On Tue, Jan 09, 2018 at 02:58:24PM +0100, Fabian Grünbichler wrote: >>>

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
On 09.01.2018 at 16:18, Fabian Grünbichler wrote: > On Tue, Jan 09, 2018 at 02:58:24PM +0100, Fabian Grünbichler wrote: >> On Mon, Jan 08, 2018 at 09:34:57PM +0100, Stefan Priebe - Profihost AG wrote: >>> Hello, >>> >>> for meltdown mitigation and perform

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
Greets, Stefan > > - Original message - > From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag> > To: "aderumier" <aderum...@odiso.com> > Cc: "pve-devel" <pve-devel@pve.proxmox.com> > Sent: Tuesday 9 January 2018 14:32:27 >

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
On 09.01.2018 at 14:24, Alexandre DERUMIER wrote: > ok thanks ! I've got the first customers reporting impact. Worst example so far: 75% cpu load in the guest instead of 12%. > - Original message - > From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
fore now model qemu64,+pcid: real 0m13.870s user 0m7.128s sys 0m6.697s qemu64: real 0m25.214s user 0m16.923s sys 0m8.956s Stefan > > - Original message - > From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag> > To: "aderumier" <aderum...@

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
On 09.01.2018 at 12:55, Alexandre DERUMIER wrote: >>> Yes - see an example which does a lot of syscalls: > > and for qemu64 ? (is it possible to send +pcid too ?) You mean: -cpu qemu64,+pcid ? Stefan > - Original message - > From: "Stefan Priebe, Profihost AG"

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
done ... real 0m26.614s user 0m17.548s sys 0m9.056s kvm started with +pcid: # time for i in $(seq 1 1 50); do du -sx /; done ... real 0m14.734s user 0m7.755s sys 0m6.973s Greets, Stefan > > - Original message - > From: "Stefan Priebe, Profihost AG"

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
" models. I'm not sure of > a way to fix it; probably it just has to be documented." That's bad as pcid is very important to performance for meltdown fixes in the linux kernel. Stefan > > - Mail original - > De: "Stefan Priebe, Profihost AG" <s.pri...@p

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-09 Thread Stefan Priebe - Profihost AG
something like that. > But I'm not sure it's easy to maintain. (maybe some user wants to keep an old > qemu version for example, with the last qemu-server code) > > As an alternative, we could patch qemu to have something like version: 2.11.x.y > (where x is the minor version from qemu, and

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-08 Thread Stefan Priebe - Profihost AG
> or add code to manage custom cpuflags and add a checkbox in cpu options ? > > > > - Original message - > From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag> > To: "pve-devel" <pve-devel@pve.proxmox.com> > Sent: Monday 8 January

Re: [pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-08 Thread Stefan Priebe - Profihost AG
On 08.01.2018 at 22:11, Michael Rasmussen wrote: > On Mon, 8 Jan 2018 21:34:57 +0100 > Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote: > >> Hello, >> >> for meltdown mitigation and performance it's important to have the pcid >> flag

[pve-devel] cpuflag: pcid needed in guest for good performance after meltdown

2018-01-08 Thread Stefan Priebe - Profihost AG
Hello, for meltdown mitigation and performance it's important to have the pcid flag passed down to the guest (e.g. https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU). My host shows the flag: # grep ' pcid ' /proc/cpuinfo | wc -l 56 But the guest does not: # grep pcid
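Independent of the flag-passthrough discussed in this thread, a common workaround is exposing the host CPU model, which carries pcid with it (hypothetical VMID; note that cpu: host ties the guest to the host's CPU model and can block live migration across heterogeneous nodes):

  # pass the host CPU model through to VM 100, pcid included
  qm set 100 --cpu host
  # then verify inside the guest
  grep -c ' pcid ' /proc/cpuinfo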

Re: [pve-devel] Updated qemu pkg needed for Meltdown and Spectre?

2018-01-04 Thread Stefan Priebe - Profihost AG
two CVEs (one of which is > "Meltdown"), yes. > > Paolo > > > - Original message - > From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag> > To: "pve-devel" <pve-devel@pve.proxmox.com> > Sent: Thursday 4 January 2018 19:25:44

Re: [pve-devel] Updated qemu pkg needed for Meltdown and Spectre?

2018-01-04 Thread Stefan Priebe - Profihost AG
> - Original message - > From: "Fabian Grünbichler" <f.gruenbich...@proxmox.com

[pve-devel] Updated qemu pkg needed for Meltdown and Spectre?

2018-01-03 Thread Stefan Priebe - Profihost AG
Hello, as far as I can see, at least SuSE updated qemu for Meltdown and Spectre to provide CPUID information to the guest. I think we need to patch qemu as well asap? Has anybody found the relevant patches? https://www.pro-linux.de/sicherheit/2/41859/preisgabe-von-informationen-in-qemu.html

[pve-devel] dropped pkts with Qemu on tap interface (RX)

2018-01-02 Thread Stefan Priebe - Profihost AG
Hello, happy new year to all! Currently I'm trying to track down a problem where we have "random" missing packets. We're doing an ssh connect from machine a to machine b every 5 minutes via rsync and ssh. Sometimes it happens that we get this cron message: "Connection to 192.168.0.2 closed by remote
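A generic first check for the RX drops mentioned in the subject line (tap device name hypothetical):

  # per-interface counters; the RX line includes a "dropped" column
  ip -s link show tap100i0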

Re: [pve-devel] pve api offline during log rotation

2017-12-12 Thread Stefan Priebe - Profihost AG
>> [ ok ] Restarting pveproxy (via systemctl): pveproxy.service. >> >> Kind regards, >> Caspar >> >> 2017-11-09 13:47 GMT+01:00 Thomas Lamprecht <t.lampre...@proxmox.com>: >> >>>> On 11/09/2017 01:40 PM, Stefan Priebe - Profihost AG

Re: [pve-devel] pve api offline during log rotation

2017-12-10 Thread Stefan Priebe - Profihost AG
Sorry for the late reply. Is there a chance to get the fix backported to 4.4? Greets, Stefan Excuse my typo, sent from my mobile phone. > On 09.11.2017 at 13:40, Stefan Priebe - Profihost AG > <s.pri...@profihost.ag> wrote: > > *arg* sorry about that and thanks for r

Re: [pve-devel] pve api offline during log rotation

2017-11-09 Thread Stefan Priebe - Profihost AG
*arg* sorry about that and thanks for resending your last paragraph. Yes, that's exactly the point. Also thanks for the restart and systemctl explanation. Greets, Stefan On 09.11.2017 at 13:35, Thomas Lamprecht wrote: > Hi, > > On 11/09/2017 01:08 PM, Stefan Priebe - Profihost AG wrot

Re: [pve-devel] pve api offline during log rotation

2017-11-09 Thread Stefan Priebe - Profihost AG
08:21 AM, Stefan Priebe - Profihost AG wrote: >> On 21.09.2017 at 15:30, Thomas Lamprecht wrote: >>> On 09/20/2017 01:26 PM, Stefan Priebe - Profihost AG wrote: >>>>> [snip] >>>> thanks for the reply. Does this also apply to PVE 4? Sorry I missed that >

Re: [pve-devel] pve api offline during log rotation

2017-11-07 Thread Stefan Priebe - Profihost AG
Hello, any news on this? Is this expected? Thanks, Stefan On 27.09.2017 at 08:21, Stefan Priebe - Profihost AG wrote: > Hi, > > On 21.09.2017 at 15:30, Thomas Lamprecht wrote: >> On 09/20/2017 01:26 PM, Stefan Priebe - Profihost AG wrote: >>> Hi, >>>

Re: [pve-devel] pve api offline during log rotation

2017-09-27 Thread Stefan Priebe - Profihost AG
Hi, On 21.09.2017 at 15:30, Thomas Lamprecht wrote: > On 09/20/2017 01:26 PM, Stefan Priebe - Profihost AG wrote: >> Hi, >> >> >> On 20.09.2017 at 10:36, Thomas Lamprecht wrote: >>> On 09/20/2017 06:40 AM, Stefan Priebe - Profihost AG wrote: >>>

Re: [pve-devel] pve api offline during log rotation

2017-09-20 Thread Stefan Priebe - Profihost AG
Hi, On 20.09.2017 at 10:36, Thomas Lamprecht wrote: > On 09/20/2017 06:40 AM, Stefan Priebe - Profihost AG wrote: >> Nobody? >> > > We register the restart command from pveproxy with the $use_hup parameter, > which then sends a SIGHUP when calling pveproxy resta

Re: [pve-devel] pve api offline during log rotation

2017-09-19 Thread Stefan Priebe - Profihost AG
Nobody? Stefan Excuse my typo, sent from my mobile phone. > On 12.09.2017 at 09:27, Stefan Priebe - Profihost AG > <s.pri...@profihost.ag> wrote: > > Hello, > > pveproxy already has a reload command - which seems to reopen the logs > correctly. Is there any

Re: [pve-devel] pve api offline during log rotation

2017-09-12 Thread Stefan Priebe - Profihost AG
Hello, pveproxy already has a reload command - which seems to reopen the logs correctly. Is there any reason why restart is used in the postrotate part of logrotate? Greets, Stefan On 12.09.2017 at 09:13, Stefan Priebe - Profihost AG wrote: > Hello, > > we're heavily using the pve ap

[pve-devel] pve api offline during log rotation

2017-09-12 Thread Stefan Priebe - Profihost AG
Hello, we're heavily using the pve api - doing a lot of calls every few minutes. Currently the log rotation does a pveproxy restart, which makes the API unavailable for a few seconds. This is pretty bad. I see the following solution: - add a special signal like SIGUSR1 to pveproxy and spiceproxy
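A minimal sketch of the reload-based postrotate the replies converge on (hypothetical logrotate snippet, assuming the daemons reopen their logs on reload/SIGHUP as discussed later in the thread):

  /var/log/pveproxy/access.log {
      daily
      rotate 7
      compress
      postrotate
          # reload instead of restart: logs reopen, listener stays up
          systemctl reload pveproxy.service spiceproxy.service
      endscript
  }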

Re: [pve-devel] lost tcp packets in bridge

2017-08-30 Thread Stefan Priebe - Profihost AG
I will report back if it happens again. Thanks! Greets, Stefan > > > - Original message - > From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag> > To: "pve-devel" <pve-devel@pve.proxmox.com> > Sent: Tuesday 22 August 2017 08:04:18 > Subject: [pve

[pve-devel] lost tcp packets in bridge

2017-08-22 Thread Stefan Priebe - Profihost AG
Hello, while running PVE 4.4 I'm observing lost tcp packets on a new cluster. Two VMs are connected to the same bridge on the same host, using virtio and linux as a guest. But the target guest does not receive all tcp packets - some get lost. I see them leaving the source using tcpdump but they

Re: [pve-devel] [RFC qemu-server stable-4] add workaround for pve 4.4 to 5.0 live migration

2017-07-11 Thread Stefan Priebe - Profihost AG
Hello, On 11.07.2017 at 16:40, Thomas Lamprecht wrote: > commit 85909c04c49879f5fffa366fc3233eee2b157e97 switched from cirrus > to vga for non windows OSs. > > This adds an artificial blocker on live migrations from PVE 4.X to > PVE 5.0. > Address it in PVE 4.4 by explicitly setting cirrus in

Re: [pve-devel] spiceproxy CONNECT method

2017-07-05 Thread Stefan Priebe - Profihost AG
-Viewer Version 1.00 I haven't checked spice-client-gtk at all. I always used remote-viewer from virt-viewer. Greets, Stefan On 05.07.2017 at 11:36, Thomas Lamprecht wrote: > Hi, > > On 07/04/2017 08:40 AM, Stefan Priebe - Profihost AG wrote: >> Hello, >>

Re: [pve-devel] spiceproxy CONNECT method

2017-07-04 Thread Stefan Priebe - Profihost AG
Hello, the following commit fixed it for me. commit 6ddcde393f8bdbb5aaf3d347213bf819c788478b Author: Stefan Priebe <s.pri...@profihost.ag> Date: Tue Jul 4 11:14:19 2017 +0200 PVE/HTTPServer: add CONNECT method for spice diff --git a/PVE/HTTPServer.pm b/PVE/HTTPServer.pm index a

[pve-devel] spiceproxy CONNECT method

2017-07-04 Thread Stefan Priebe - Profihost AG
Hello, I'm doing some experiments with the spiceproxy. Currently just trying to get it working. After downloading a spice connection file from the web gui, remote-viewer stops with: "failed to connect HTTP proxy connection failed: 501 method 'CONNECT' not available" The access.log of the spice

Re: [pve-devel] stress testing pve api / pveproxy

2017-04-27 Thread Stefan Priebe - Profihost AG
On 27.04.2017 at 19:12, Dietmar Maurer wrote: > Hi Stefan, > > is this already solved? Or do you still observe that problem? Thanks for asking. It seems to be a client-side timeout, not on the PVE side. But what I did not understand is why I don't see it if I raise $ua->timeout( $secs );

[pve-devel] stress testing pve api / pveproxy

2017-04-25 Thread Stefan Priebe - Profihost AG
Hello, while stress testing the pve api I'm seeing a "500 read timeout" response to my simple GET requests against the API pretty often - around 1 out of 50 requests, firing one request every 200ms (wait for the answer, fire the next one). That one is coming from $response->status_line of a
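The request pattern is easy to reproduce with curl (host name hypothetical, authentication omitted; the original test used a Perl LWP client):

  # 50 sequential GETs, one every 200ms, with a generous client timeout
  for i in $(seq 1 50); do
      curl -sk -m 30 -o /dev/null -w "%{http_code}\n" \
          https://pve.example.com:8006/api2/json/version
      sleep 0.2
  done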

Re: [pve-devel] applied: [PATCH cluster v6] implement chown and chmod for user root group www-data and perm 0640

2017-04-05 Thread Stefan Priebe - Profihost AG
THX Stefan On 05.04.2017 at 12:11, Wolfgang Bumiller wrote: > applied to both master and stable-4 > > On Tue, Apr 04, 2017 at 04:43:31PM +0200, Thomas Lamprecht wrote: >> From: Stefan Priebe <s.pri...@profihost.ag> >> >> This allows us to use management soft

[pve-devel] PVE daemon / PVE proxy

2017-03-27 Thread Stefan Priebe - Profihost AG
Hi, I sometimes have problems with the pve api - it just closes the connection or gives me a timeout - and I wanted to debug this. I wanted to raise the number of workers, but wasn't able to find an option to change the workers of pveproxy and/or pvedaemon. Greets, Stefan

Re: [pve-devel] clustered /var/log/pve

2017-03-22 Thread Stefan Priebe - Profihost AG
Never mind. I found the culprit: the file is just read too early by other nodes. Stefan Excuse my typo, sent from my mobile phone. > On 22.03.2017 at 15:19, Stefan Priebe - Profihost AG > <s.pri...@profihost.ag> wrote: > > Hi, > > this works fine with /var/log/pve/tas

Re: [pve-devel] clustered /var/log/pve

2017-03-22 Thread Stefan Priebe - Profihost AG
e anything special with the migration log? All others get the correct state of OK. Greets, Stefan On 11.03.2017 at 08:28, Dietmar Maurer wrote: > > >> On March 10, 2017 at 9:24 PM Stefan Priebe - Profihost AG >> <s.pri...@profihost.ag> wrote: >> >>

Re: [pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-21 Thread Stefan Priebe - Profihost AG
Hi Thomas, thanks, and yes, if you will do a V5 that would be great! Stefan On 21.03.2017 at 10:46, Thomas Lamprecht wrote: > Hi, > > On 03/20/2017 03:11 PM, Stefan Priebe wrote: >> This allows us to use management software for files inside of /etc/pve. >> e.g. saltstack whi

[pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-20 Thread Stefan Priebe
This allows us to use management software for files inside of /etc/pve, e.g. saltstack, which relies on being able to set uid, gid and chmod. Reviewed-by: Thomas Lamprecht <t.lampre...@proxmox.com> Signed-off-by: Stefan Priebe <s.pri...@profihost.ag> --- data/src/p
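For context, the kind of saltstack state this patch enables - a file under /etc/pve managed with exactly the supported owner, group and mode (target path and source are illustrative):

  # /srv/salt/pve/firewall.sls
  /etc/pve/firewall/cluster.fw:
    file.managed:
      - user: root
      - group: www-data
      - mode: '0640'
      - source: salt://pve/files/cluster.fw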

[pve-devel] pve-cluster: V4 implement chown and chmod for user root group www-data

2017-03-20 Thread Stefan Priebe
V4: - allow chmod for priv path as well

Re: [pve-devel] broken system / pve-firewall

2017-03-19 Thread Stefan Priebe - Profihost AG
On 19.03.2017 at 21:42, Dietmar Maurer wrote: >> To me the main question is why pve-cluster provides a default of 0, >> which disables iptables for bridges and makes pve-firewall useless for >> linux bridges. > > AFAIR this is for performance reasons ... sure, but then pve-firewall isn't working

Re: [pve-devel] broken system / pve-firewall

2017-03-19 Thread Stefan Priebe - Profihost AG
Hi, On 19.03.2017 at 14:44, Dietmar Maurer wrote: >> After digging around for some weeks I found out that the chain FORWARD >> does not receive packets anymore? > > Any hints in syslog? No, the reason is simply that net.bridge.bridge-nf-call-iptables is 0 again. Most probably because
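The sysctl in question can be checked and re-enabled like this (standard kernel knobs; the persistence location is an assumption about the local setup):

  # is bridged traffic handed to iptables at all?
  sysctl net.bridge.bridge-nf-call-iptables
  # re-enable immediately, then persist across reboots
  sysctl -w net.bridge.bridge-nf-call-iptables=1
  echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf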

[pve-devel] broken system / pve-firewall

2017-03-18 Thread Stefan Priebe - Profihost AG
Hello list, I'm going crazy with a problem I don't understand. After some time the pve-firewall stops working for me: it doesn't filter any packets anymore. If I restart pve-firewall everything is fine again. After digging around for some weeks I found out that the chain FORWARD does not receive

Re: [pve-devel] clustered /var/log/pve

2017-03-10 Thread Stefan Priebe - Profihost AG
Great, thanks. Is there any reason why we don't use the existing pmxcfs for that path as well? Stefan Excuse my typo, sent from my mobile phone. > On 11.03.2017 at 08:28, Dietmar Maurer <diet...@proxmox.com> wrote: > > > >> On March 10, 2017 at 9:24 PM Stefan Priebe

Re: [pve-devel] clustered /var/log/pve

2017-03-10 Thread Stefan Priebe - Profihost AG
On 10.03.2017 at 21:20, Dietmar Maurer wrote: >> Sure. Great. So there's no problem if all files got shared between >> nodes? > > Sorry, but I never tested that. > >> I've never looked at the code for the active and index files... > > I guess you would need some cluster wide locking, or use

Re: [pve-devel] clustered /var/log/pve

2017-03-10 Thread Stefan Priebe - Profihost AG
On 10.03.2017 at 20:54, Dietmar Maurer wrote: >> Is there any reason not to run /var/log/pve on ocfs2? So that it is >> shared over all nodes? > > never tried. But I guess it is no problem as long as ocfs2 works (cluster > quorate). Sure. Great. So there's no problem if all files get shared

[pve-devel] clustered /var/log/pve

2017-03-10 Thread Stefan Priebe - Profihost AG
Hello, I don't like that I don't have a complete task history of a VM after migration. Is there any reason not to run /var/log/pve on ocfs2? So that it is shared over all nodes? Greets, Stefan

Re: [pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-10 Thread Stefan Priebe - Profihost AG
thanks for the review. V4 sent. Stefan On 10.03.2017 at 10:20, Thomas Lamprecht wrote: > small comment inline, > > On 03/09/2017 08:17 PM, Stefan Priebe wrote: >> This allows us to use management software for files inside of /etc/pve. >> e.g. saltstack which relies on bein

[pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-10 Thread Stefan Priebe
This allows us to use management software for files inside of /etc/pve, e.g. saltstack, which relies on being able to set uid, gid and chmod. Reviewed-by: Thomas Lamprecht <t.lampre...@proxmox.com> Signed-off-by: Stefan Priebe <s.pri...@profihost.ag> --- data/src/p

[pve-devel] V3 implement chown and chmod for user root group www-data and

2017-03-09 Thread Stefan Priebe
fixed the indent in the fuse fuse_operations as well

[pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-09 Thread Stefan Priebe
This allows us to use management software for files inside of /etc/pve, e.g. saltstack, which relies on being able to set uid, gid and chmod. Signed-off-by: Stefan Priebe <s.pri...@profihost.ag> --- data/src/pmxcfs.c | 33 - 1 file changed, 32 insertions(+), 1 de

Re: [pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-09 Thread Stefan Priebe - Profihost AG
Hello Thomas, On 09.03.2017 at 18:09, Thomas Lamprecht wrote: > On 03/09/2017 05:26 PM, Stefan Priebe wrote: >> This allows us to use management software for files inside of /etc/pve. >> e.g. saltstack which relies on being able to set uid, gid and chmod >> >> Signed-o

[pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-09 Thread Stefan Priebe
This allows us to use management software for files inside of /etc/pve, e.g. saltstack, which relies on being able to set uid, gid and chmod. Signed-off-by: Stefan Priebe <s.pri...@profihost.ag> --- data/src/pmxcfs.c | 33 - 1 file changed, 32 insertions(+), 1 de

Re: [pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-09 Thread Stefan Priebe - Profihost AG
dy has / supports. At least saltstack always sets chmod and chown values and fails if it can't. Right now it believes it was successful while providing salt with the correct values: user: root group: www-data chmod 0640 Greets, Stefan > >> On March 9, 2017 at 5:26 PM Stefan Priebe <s.pri...@p

[pve-devel] [PATCH] implement chown and chmod for user root group www-data and perm 0640

2017-03-09 Thread Stefan Priebe
This allows us to use management software for files inside of /etc/pve, e.g. saltstack, which relies on being able to set uid, gid and chmod. Signed-off-by: Stefan Priebe <s.pri...@profihost.ag> --- data/src/pmxcfs.c | 41 - 1 file changed, 40 insertions

Re: [pve-devel] applied: [PATCH firewall] simulator: make lxc/qemu optional

2017-02-06 Thread Stefan Priebe - Profihost AG
To avoid cyclic deps an alternative might be forward declaration. http://www.perlmonks.org/?node_id=1057957 Stefan Excuse my typo, sent from my mobile phone. On 06.02.2017 at 18:38, Dietmar Maurer wrote: >> An alternative might be >> http://perldoc.perl.org/autouse.html

Re: [pve-devel] applied: [PATCH firewall] simulator: make lxc/qemu optional

2017-02-06 Thread Stefan Priebe - Profihost AG
An alternative might be http://perldoc.perl.org/autouse.html Stefan Excuse my typo, sent from my mobile phone. > On 06.02.2017 at 18:20, Dietmar Maurer wrote: > > is there really no other way to solve this issue? >

Re: [pve-devel] pve-firewall / current git master

2017-02-06 Thread Stefan Priebe - Profihost AG
ine: > > On Mon, Feb 06, 2017 at 11:25:44AM +0100, Stefan Priebe - Profihost AG wrote: >> Hi, >> >> after upgrading my test cluster to the latest git versions from 4.3 I've no >> working firewall rules anymore. All chains contain an ACCEPT rule. But >> I'm not

[pve-devel] pve-firewall / current git master

2017-02-06 Thread Stefan Priebe - Profihost AG
Hi, after upgrading my test cluster to the latest git versions from 4.3 I've no working firewall rules anymore. All chains contain an ACCEPT rule, but I'm not sure whether this was also the case with 4.3. Either way, it breaks the rules. The chain is this one: # iptables -L tap137i0-IN -vnx Chain

[pve-devel] no subscription repo hash sum mismatch

2016-12-28 Thread Stefan Priebe - Profihost AG
Hi, W: Failed to fetch http://download.proxmox.com/debian/dists/jessie/pve-no-subscription/binary-amd64/Packages Hash Sum mismatch Stefan

Re: [pve-devel] unlock VM through API?

2016-12-19 Thread Stefan Priebe - Profihost AG
On 19.12.2016 at 08:40, Fabian Grünbichler wrote: > On Mon, Dec 19, 2016 at 07:23:35AM +0100, Stefan Priebe - Profihost AG wrote: >> Anything wrong or a bug? >> >> Greets, >> Stefan > > nothing wrong. unlocking a VM is possible with the special "qm unloc
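The command cut off in the quote is qm unlock; for illustration (VMID hypothetical):

  # clear a stale lock (e.g. 'migrate') on VM 100 from the CLI
  qm unlock 100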

Re: [pve-devel] 3.10.0 add netpoll support to veth (patch removed)

2016-12-18 Thread Stefan Priebe - Profihost AG
I think you can simply remove it. It's already upstream and I'm not sure there are users of it in 3.10. Thanks. Stefan Excuse my typo, sent from my mobile phone. > On 19.12.2016 at 07:08, Dietmar Maurer wrote: > > Hi Stefan, > > I just updated the 3.10.0 kernel: > >

Re: [pve-devel] unlock VM through API?

2016-12-18 Thread Stefan Priebe - Profihost AG
Anything wrong or a bug? Greets, Stefan Excuse my typo, sent from my mobile phone. > On 16.12.2016 at 22:19, Stefan Priebe - Profihost AG > <s.pri...@profihost.ag> wrote: > > Hello, > > is there a way to unlock a VM through the API? > > I tried it this way bu

[pve-devel] unlock VM through API?

2016-12-16 Thread Stefan Priebe - Profihost AG
Hello, is there a way to unlock a VM through the API? I tried it this way but this does not work: pve:/nodes/testnode1/qemu/100> set config -delete lock VM is locked (migrate) Greets, Stefan

[pve-devel] low network speed and high ping times after upgrade to PVE 4.4

2016-12-13 Thread Stefan Priebe - Profihost AG
Hello, after upgrading a PVE cluster from 3.4 to 4.4 I have some higher-volume vms which show high ping times even when they're on the same node, and slow network speed tested with iperf. Has anybody seen something like this before? --- 192.168.0.11 ping statistics --- 20 packets transmitted, 20

[pve-devel] PVE 4.x watchdog-mux.service

2016-12-05 Thread Stefan Priebe - Profihost AG
Hi, since starting to upgrade some nodes to PVE 4.x I've seen that a lot of them have a failed watchdog-mux service. Is there any reason why this one is enabled by default? # systemctl --failed UNIT LOAD ACTIVE SUB DESCRIPTION ● watchdog-mux.service loaded failed failed

Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Stefan Priebe - Profihost AG
On 29.11.2016 at 10:29, Dietmar Maurer wrote: >> So it seems that the whole firewall breaks if there is somewhere >> something wrong. >> >> I think especially for the firewall it's important to just skip that >> line but process all other values. > > That is how it should work. If there is a

Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Stefan Priebe - Profihost AG
On 29.11.2016 at 10:24, Fabian Grünbichler wrote: > On Tue, Nov 29, 2016 at 10:10:53AM +0100, Stefan Priebe - Profihost AG wrote: >> Hello, >> >> today I've noticed that the firewall is nearly inactive on a node. >> >> systemctl status says: >> Nov 29 10

Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Stefan Priebe - Profihost AG
-v4_swap PVEFW-120-letsencrypt-v4 flush PVEFW-120-letsencrypt-v4_swap destroy PVEFW-120-letsencrypt-v4_swap which fails: ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the CIDR parameter of the IP address is invalid Stefan On 29.11.2016 at 10:10, Stefan Priebe - Profihost

[pve-devel] making the firewall more robust?

2016-11-29 Thread Stefan Priebe - Profihost AG
Hello, today I've noticed that the firewall is nearly inactive on a node. systemctl status says: Nov 29 10:07:05 node2 pve-firewall[2534]: status update error: ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the CIDR parameter of the IP address is invalid Nov 29 10:07:14 node2
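One way to narrow such an error down (generic triage, not from the thread; the file name assumes the VM-level IPset PVEFW-120-letsencrypt-v4 quoted in the follow-up belongs to VM 120):

  # recompile the ruleset and surface errors without applying anything
  pve-firewall compile 2>&1 | grep -i error
  # then inspect the suspect IPset definition for a malformed CIDR
  grep -A20 '\[IPSET letsencrypt\]' /etc/pve/firewall/120.fw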

Re: [pve-devel] Something missing in http://pve.proxmox.com/wiki/HTTPS_Certificate_Configuration_(Version_4.x_and_newer) ?

2016-11-22 Thread Stefan Priebe - Profihost AG
On 22.11.2016 at 14:38, Fabian Grünbichler wrote: > On Tue, Nov 22, 2016 at 01:57:47PM +0100, Fabian Grünbichler wrote: >> On Tue, Nov 22, 2016 at 01:09:17PM +0100, Stefan Priebe - Profihost AG wrote: >>> Hi, >>> >>> On 22.11.2016 at 12:26, Fabian Gr

Re: [pve-devel] Something missing in http://pve.proxmox.com/wiki/HTTPS_Certificate_Configuration_(Version_4.x_and_newer) ?

2016-11-22 Thread Stefan Priebe - Profihost AG
Hi, On 22.11.2016 at 12:26, Fabian Grünbichler wrote: > On Tue, Nov 22, 2016 at 12:11:22PM +0100, Stefan Priebe - Profihost AG wrote: >> On 22.11.2016 at 11:56, Dietmar Maurer wrote: >>> I think this commit should solve the issue: >>> >>> https://git.proxmox

Re: [pve-devel] Something missing in http://pve.proxmox.com/wiki/HTTPS_Certificate_Configuration_(Version_4.x_and_newer) ?

2016-11-22 Thread Stefan Priebe - Profihost AG
root certificate, in PEM format)" With the full chain it's not working. I then removed the whole chain, put only my final crt into that file, and now it's working fine. With the full chain $depth was 2 in my case. Greets, Stefan >>> On November 22, 2016 at 11:49 AM Stefan Priebe - Pr

[pve-devel] Something missing in http://pve.proxmox.com/wiki/HTTPS_Certificate_Configuration_(Version_4.x_and_newer) ?

2016-11-22 Thread Stefan Priebe - Profihost AG
Hi, while using a custom certificate was working fine for me with V3, I'm getting the following error message if I'm connected to node X and want to view the hw tab of a VM running on node Y: 596 ssl3_get_server_certificate: certificate verify failed Request

Re: [pve-devel] since upgrade v3 => v4 moving qemu-server config files do not work

2016-11-22 Thread Stefan Priebe - Profihost AG
ignore me, my fault... Stefan On 22.11.2016 at 10:15, Stefan Priebe - Profihost AG wrote: > Hi, > > in the past / with V3 I was able to move qemu-server VM config files > around simply with mv. > > Under v4 it seems this no longer works: the files automagically move to

[pve-devel] since upgrade v3 => v4 moving qemu-server config files do not work

2016-11-22 Thread Stefan Priebe - Profihost AG
Hi, in the past / with V3 I was able to move qemu-server VM config files around simply with mv. Under v4 it seems this no longer works: the files automagically move back to their old location. Here an example: [node4 ~]# for VM in $(ps aux|grep "kvm"|grep -- "-id"|sed -e "s/.*-id //" -e "s/ .*//"); do
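For reference, pmxcfs keeps each VM config under its owning node, so a manual move between nodes is a rename across node directories (node names and VMID hypothetical; normally the migration code does this for you):

  # move VM 100's config from node1 to node4 inside the clustered /etc/pve
  mv /etc/pve/nodes/node1/qemu-server/100.conf \
     /etc/pve/nodes/node4/qemu-server/100.conf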

Re: [pve-devel] [PATCH] VZDump: die with error if plugin loading fails

2016-11-16 Thread Stefan Priebe - Profihost AG
On 17.11.2016 at 08:42, Fabian Grünbichler wrote: > On Thu, Nov 17, 2016 at 07:01:24AM +0100, Dietmar Maurer wrote: >>> It is really hard to review patches without descriptions. Please >>> can you add minimal information? >> >> Oh, just saw you sent that in a separate mail - please ignore me! >

Re: [pve-devel] [PATCH] RBD: snap purge does not support automatic unprotect so list all snapshots and then unprotect and delete them

2016-11-16 Thread Stefan Priebe - Profihost AG
On 17.11.2016 at 07:33, Dietmar Maurer wrote: > AFAIK we only protect base volumes, and we 'unprotect' that in the code. > So what is the purpose of this patch? > good question ;-) I just remember that I had this situation where I did clones from snapshots, which resulted in protected snapshots.

Re: [pve-devel] VZDump/QemuServer: set bless class correctly

2016-11-16 Thread Stefan Priebe - Profihost AG
On 17.11.2016 at 07:20, Dietmar Maurer wrote: >> While blessing it is good practice to provide the class. This also makes it >> possible to use >> QemuServer as a base / parent class. > > Why do you want another class (QemuServer) as base? We've a custom class PHQemuServer which has

Re: [pve-devel] [PATCH] allow --allow-shrink on RBD resize

2016-11-16 Thread Stefan Priebe - Profihost AG
, ext4, btrfs, ... So there are several cases where you want to shrink a volume without downtime for the server. Greets, Stefan > >> On November 16, 2016 at 8:13 PM Stefan Priebe <s.pri...@profihost.ag> wrote: >> >> >> Signed-off-by: Stefan Priebe <s.p
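The underlying Ceph CLI call the patch exposes through PVE (standard rbd syntax; pool/image name hypothetical, --size is in MB):

  # shrink an RBD image to 10 GiB; rbd refuses without the explicit flag
  rbd resize --size 10240 --allow-shrink rbd/vm-100-disk-1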

Re: [pve-devel] [PATCH] allow --allow-shrink on RBD resize

2016-11-16 Thread Stefan Priebe - Profihost AG
h.com/issues/15991 > https://github.com/ceph/ceph/pull/9408 Both are closed or marked as refused, so I don't think this will change in the future. Greets, Stefan > - Original message - > From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag> > To: "pv

[pve-devel] VZDump: die with error if plugin loading fails

2016-11-16 Thread Stefan Priebe
It took me some time to notice that a custom modification was causing a whole plugin to fail loading. The warn also hides in the systemctl status -l / journal log. I think dying is better if a plugin contains an error. [PATCH] VZDump: die with error if plugin loading fails
