[PVE-User] looking for recommendations of VLAN setup

2017-02-02 Thread Uwe Sauter
Hi all, I would like to hear recommendations regarding the network setup of a Proxmox cluster. The situation is the following: * Proxmox hosts have several ethernet links * multiple VLANs are used in our datacenter * I cannot guarantee that the VLANs are on the same interface for each host

Re: [PVE-User] High Availability / Fencing in Proxmox VE 4.x

2017-02-06 Thread Uwe Sauter
Hi Kevin, thanks for explaining your setup. Comments below. On 06.02.2017 at 12:57, Kevin Lemonnier wrote: >> * How does fencing work in Proxmox (technically)? >> Due to fencing being based on watchdogs I assume that some piece of software regularly resets the watchdog's clock so

[PVE-User] High Availability / Fencing in Proxmox VE 4.x

2017-02-06 Thread Uwe Sauter
Dear all, I'm a bit confused by the wiki pages regarding high availability [1] & [2]. I would appreciate it if my questions could be answered. (I searched the mailing list archive but didn't find threads regarding HA that are new enough to cover v4.x.) * When does a Proxmox cluster become a HA

Re: [PVE-User] looking for recommendations of VLAN setup

2017-02-06 Thread Uwe Sauter
Hi Alwin, thanks for your suggestion. Comments below. On 04.02.2017 at 12:04, Alwin Antreich wrote: […] >> What kind of network setup would you recommend? > We also use multiple VLANs on our network. As Linux bridges are VLAN-aware (bridge-vlan-aware yes), we set the VLAN in the VM

Re: [PVE-User] Using OpenVswitch without ethX-named devices

2017-02-20 Thread Uwe Sauter
If I may add another question: how are you planning to handle the dynamic interface names that were introduced a few years ago? See https://en.wikipedia.org/wiki/Consistent_Network_Device_Naming Regards, Uwe On 20.02.2017 at 20:58, Sten Aus wrote: > Hi! Thanks for the fast response!

Re: [PVE-User] Web GUI: connection reset by peer (596)

2017-02-25 Thread Uwe Sauter
difficulties in cluster communication. Have a look at these notes: https://pve.proxmox.com/wiki/Multicast_notes On Fri, 24 Feb 2017 at 22:45, Uwe Sauter <uwe.sauter...@gmail.com> wrote: Hi,

Re: [PVE-User] Web GUI: connection reset by peer (596)

2017-02-25 Thread Uwe Sauter
Restarting the service on all nodes restored full operation of the web GUI. Perhaps someone from Proxmox could add this piece of knowledge to the wiki? Regards, Uwe On 25.02.2017 at 09:23, Uwe Sauter wrote: > I'm sorry, I forgot to mention that I already switched to "transport: ud

[PVE-User] Web GUI: connection reset by peer (596)

2017-02-24 Thread Uwe Sauter
Hi, I have a GUI problem with a four node cluster that I installed recently. I was able to follow this up to ext-all.js but I'm no web developer so this is where I got stuck. Background: * four node cluster * each node has two interfaces in use ** eth0 is 1Gb used for management and some VM

Re: [PVE-User] Web GUI: connection reset by peer (596)

2017-02-24 Thread Uwe Sauter
s in every nodes??? 2017-02-24 15:04 GMT-03:00 Uwe Sauter <uwe.sauter...@gmail.com>: > Hi, > I have a GUI problem with a four node cluster that I installed recently. I was able

[PVE-User] PVE and NAT mode

2017-02-28 Thread Uwe Sauter
Hi, I'm trying to use NAT in one of my VMs as I have no official IP address for it. I found [1] which explains how to set up masquerading, but I'm a bit confused. [1] uses 10.10.10.0/24 as source address. In the PVE documentation [2] it is mentioned that PVE will serve addresses in the

Re: [PVE-User] VLANs

2017-02-28 Thread Uwe Sauter
I have a setup where I don't use Proxmox's own VLAN management but have one bridge per VLAN that I use: /etc/network/interfaces ###
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.253.200
    netmask 255.255.255.0
    gateway 192.168.253.254

auto
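The snippet cuts off before the per-VLAN bridges; a sketch of the kind of stanza such a setup continues with (VLAN ID and names are placeholders, not the original poster's values):
----
auto eth0.100
iface eth0.100 inet manual

auto vmbr100
iface vmbr100 inet manual
    bridge_ports eth0.100
    bridge_stp off
    bridge_fd 0
----
VMs on VLAN 100 then attach to vmbr100 untagged, so no VLAN tag needs to be set in the VM config.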

Re: [PVE-User] Broken cluster

2017-03-14 Thread Uwe Sauter
Check that there are no firewalls blocking communication. I had a problem like this a couple of weeks ago and all I needed was to properly configure the settings for pveproxy. (There are other firewall settings, too.) Am 14.03.2017 um 20:15 schrieb Kevin Lemonnier: Looks like they can't find

Re: [PVE-User] Broken cluster

2017-03-14 Thread Uwe Sauter
Uwe Sauter wrote: Check that there are no firewalls blocking communication. I had a problem like this a couple of weeks ago and all I needed was to properly configure the settings for pveproxy. (There are other firewall settings, too.) On 14.03.2017 at 20:15, Kevin Lemonnier wrote: Looks

[PVE-User] Update stuck

2017-03-13 Thread Uwe Sauter
Hi, I was installing the latest updates to PVE 4.4 yesterday and it got stuck after the configuration step for Ceph. I was able to trace this to a process "systemd-tty-ask-password-agent --watch" while systemd was restarting ceph.target. It seems that systemd confused its internal state

Re: [PVE-User] Ceph Harddisk

2017-03-02 Thread Uwe Sauter
Yes, you can add arbitrarily sized disks to Ceph. Usually the disk size is used as the OSD's weight factor, which influences the placement of data. On 2 March 2017 22:49:23 CET, Daniel wrote: >Hi there, >I am playing a bit with Ceph for some weeks now and I just wanted to

Re: [PVE-User] Ceph Harddisk

2017-03-03 Thread Uwe Sauter
to either the SSD or the HDD, as they are run as independent storage systems. https://eXtremeSHOK.com On 03/03/2017 12:05 AM, Uwe Sauter wrote: >> Yes, you can add arbitrarily sized disks

Re: [PVE-User] VLANs

2017-02-28 Thread Uwe Sauter
> 6- Configure your switch (both ports AND bond) to your VLANs, all tagged. > 7- Reboot. > In your Network settings page, you should see only OVS elements (+ the two eths of the bond as Network Devices). > You can assign IPs directly to vmbrs when you don't need ot

Re: [PVE-User] PVE and NAT mode

2017-02-28 Thread Uwe Sauter
Hi Yannick, I'll give it a try tomorrow. Thanks for the suggestion. Regards, Uwe On 28.02.2017 at 19:45, Yannick Palanque wrote: > Hello, > At 2017-02-28T13:20:24+0100, Uwe Sauter <uwe.sauter...@gmail.com> wrote: >> Hi, >> I'm tryin

Re: [PVE-User] virtio-9p-pci is not a valid device model name, since yesterday

2017-02-28 Thread Uwe Sauter
Hi, I'd like to make you aware of a security flaw in virtfs [1] that was published about two weeks ago. Might be worthwhile to get this into the coming update if this applies to PVE. Regards, Uwe [1] https://bugs.chromium.org/p/project-zero/issues/detail?id=1035 On 27.02.2017 at

Re: [PVE-User] Cluster won't reform so I can't restart VMs

2017-08-11 Thread Uwe Sauter
If it is a multicast problem and your cluster is not that big (~10 nodes) you could switch to using "udpu" in corosync.conf: totem { […] config_version: +=1 # increment with every change you make transport: udpu } On 11.08.2017 at 13:48, Alexandre DERUMIER wrote: > seem to be a
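For reference, a sketch of the complete totem section this refers to (the version numbers and network address are placeholders; on PVE the file to edit is /etc/pve/corosync.conf so the change propagates to all nodes):
----
totem {
  version: 2
  config_version: 5   # increment with every change you make
  transport: udpu     # unicast UDP instead of multicast
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.0.0
  }
}
----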

Re: [PVE-User] Ceph server with Ubuntu

2017-08-14 Thread Uwe Sauter
Are there any reasons on your side to use Ubuntu? If you want to stay compatible you could also install Proxmox including Ceph but not use those hosts for virtualization… On 14.08.2017 at 14:07, Gilberto Nunes wrote: > Hi > Regarding Ceph, can I use 3 Ubuntu Server 16 Xenial to build a Ceph

Re: [PVE-User] Ceph server with Ubuntu

2017-08-14 Thread Uwe Sauter
2017-08-14 9:30 GMT-03:00 Uwe Sauter <uwe.sauter...@gmail.com>: > Are there any reasons on your side to use Ubuntu? If you want to stay compatible you could also install Proxmox including Ceph but not use those hosts for virtu

Re: [PVE-User] Ceph server with Ubuntu

2017-08-14 Thread Uwe Sauter
2017-08-14 9:51 GMT-03:00 Uwe Sauter <uwe.sauter...@gmail.com>: > Then the question is if > a) you'd want to integrate those Ubuntu servers into an existing Ceph cluster (managed by Proxmox) or

Re: [PVE-User] qm migrate: strange output

2017-07-07 Thread Uwe Sauter
-1~bpo80+1 On 07.07.2017 at 17:38, Nicola Ferrari (#554252) wrote: > On 20/06/2017 18:19, Uwe Sauter wrote: >> Can someone explain under which circumstances this output is displayed instead of just the short message that migration was started? > I

Re: [PVE-User] qm migrate: strange output

2017-07-10 Thread Uwe Sauter
Ah, thanks. (Sorry for the late reply, Gmail put your answer into the spam folder.) On 20.06.2017 at 19:01, Michael Rasmussen wrote: > The former is for HA VMs, the latter for non-HA VMs > On June 20, 2017 6:19:36 PM GMT+02:00, Uwe Sauter

[PVE-User] Automatic migration before reboot / shutdown? Migration to host in same group?

2017-07-06 Thread Uwe Sauter
Hi all, 1) I was wondering how a PVE (4.4) cluster will behave when one of the nodes is restarted / shut down either via WebGUI or via commandline. Will hosted, HA-managed VMs be migrated to other hosts before shutting down or will they be stopped (and restarted on another host once HA recognizes

Re: [PVE-User] Automatic migration before reboot / shutdown? Migration to host in same group?

2017-07-06 Thread Uwe Sauter
Hi Thomas, thank you for your insight. >> 1) I was wondering how a PVE (4.4) cluster will behave when one of the nodes >> is restarted / shutdown either via WebGUI or via >> commandline. Will hosted, HA-managed VMs be migrated to other hosts before >> shutting down or will they be stopped

Re: [PVE-User] Automatic migration before reboot / shutdown? Migration to host in same group?

2017-07-06 Thread Uwe Sauter
Thomas, >>> An idea is to allow the configuration of the behavior and add two >>> additional behaviors, >>> i.e. migrate away and relocate away. >> What's the difference between migration and relocation? Temporary vs. >> permanent? > > Migration does an online migration if possible (=on VMs)

Re: [PVE-User] Proxmox 5 (LACP/Bridge/VLAN120)

2017-08-22 Thread Uwe Sauter
An example out of my head: /etc/network/interfaces -
# management interface
auto eth0
iface eth0 inet static
    address 10.100.100.8
    netmask 255.255.255.0
    gateway 10.100.100.254

# 1st interface in bond
auto eth1
iface eth1 inet manual
    mtu 9000

# 2nd

Re: [PVE-User] Proxmox 5 (LACP/Bridge/VLAN120)

2017-08-22 Thread Uwe Sauter
on bond with IP
auto bond0.120
iface bond0.120 inet static
    address 10.100.100.8
    netmask 255.255.255.0
    mtu 9000

# interface for vlan 130 on bond without IP (just for VMs)
auto bond0.130
iface bond0.130 inet manual
- On 23.08.2017 at 07:27, Uwe Sauter wrote
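The two snippets above omit the bond itself; a sketch of the missing middle section (slave names and the LACP options are assumptions, not taken from the original mail):
----
# 2nd interface in bond
auto eth2
iface eth2 inet manual
    mtu 9000

# LACP bond over both interfaces
auto bond0
iface bond0 inet manual
    bond-slaves eth1 eth2
    bond-mode 802.3ad
    bond-miimon 100
    mtu 9000
----
A bridge per bond0.X subinterface (or one VLAN-aware bridge) can then be put on top for the VMs.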

Re: [PVE-User] Ceph Luminous

2017-08-17 Thread Uwe Sauter
https://pve.proxmox.com/wiki/Ceph_Server#Ceph_on_Proxmox_VE_5.0 "Note: the current Ceph Luminous 12.1.x is the release candidate, for production ready Ceph Cluster packages please wait for version 12.2.x" On 17.08.2017 at 16:58, Gilberto Nunes wrote: > Hi guys > Ceph Luminous is

Re: [PVE-User] PVE behind reverse proxy: different webroot possible?

2017-05-11 Thread Uwe Sauter
sistet (compare lines 4296 and 33465). This is the reason why I have 2 sub_filters for basically the same replacement. On 09.05.2017 at 11:01, Thomas Lamprecht wrote: > Hi, > On 05/05/2017 06:18 PM, Uwe Sauter wrote: >> Hi, >> I've seen the wiki page [1] that ex

Re: [PVE-User] PVE behind reverse proxy: different webroot possible?

2017-05-09 Thread Uwe Sauter
Hi Thomas, thank you for the effort of explaining. > Hmm, there are some problems as we mostly set absolute paths on resources (images, JS and CSS files), so the loading fails... > I.e., pve does not know that it is accessed from https://example.com/pve-node/ and tries to load the

[PVE-User] Backup to Ceph / NFSv4

2017-05-18 Thread Uwe Sauter
Hi, as my Proxmox hosts don't have enough local storage I wanted to do backups into the "network". One option that came into mind was using the existing Ceph installation to do backups. What's currently missing for that (as far as I can tell) is Proxmox support for a Ceph-backed filesystem

[PVE-User] qm migrate: strange output

2017-06-20 Thread Uwe Sauter
Hi all, usually when I update my PVE cluster I do it in a rolling fashion: 1) empty one node from running VMs 2) update & reboot that node 3) go to next node 4) migrate all running VMs to already updated node 5) go to 2 until no more nodes need update For step 1 (or 4) I usually do: # qm list
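A sketch of how step 1 (or 4) can be scripted (the target node name is a placeholder, and it assumes every listed VM can be migrated online):
----
# move every running VM to the already-updated node "pve2"
for vmid in $(qm list | awk 'NR>1 && $3 == "running" {print $1}'); do
    qm migrate "$vmid" pve2 --online
done
----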

[PVE-User] Problems with backup process and NFS

2017-05-19 Thread Uwe Sauter
Hi all, after having succeeded in getting an almost TCP-based NFS share mounted (see yesterday's thread) I'm now struggling with the backup process itself. The definition of the NFS share in /etc/pve/storage.cfg is:
nfs: aurel
    export /backup/proxmox-infra
    path /mnt/pve/aurel
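For context, a sketch of a complete NFS entry in /etc/pve/storage.cfg (the server address and the options shown here are illustrative, not the original values):
----
nfs: aurel
    server 192.168.1.10
    export /backup/proxmox-infra
    path /mnt/pve/aurel
    content backup
    options vers=4
    maxfiles 3
----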

Re: [PVE-User] Problems with backup process and NFS

2017-05-19 Thread Uwe Sauter
Hi Fabian, thanks for looking into this. As I already mentioned yesterday, my NFS setup tries to use TCP as much as possible, so the only UDP port used / allowed in the NFS server's firewall is udp/111 for the portmapper (to allow showmount to work). >> Issue 1: >> Backups failed tonight with "Error:

Re: [PVE-User] Problems with backup process and NFS

2017-05-19 Thread Uwe Sauter
On 19.05.2017 at 11:53, Fabian Grünbichler wrote: > On Fri, May 19, 2017 at 11:26:35AM +0200, Uwe Sauter wrote: >> Hi Fabian, >> thanks for looking into this. >> As I already mentioned yesterday my NFS setup tries to use TCP as much as possi

Re: [PVE-User] Backup to Ceph / NFSv4

2017-05-18 Thread Uwe Sauter
On 18.05.2017 at 15:04, Emmanuel Kasper wrote: > On 05/18/2017 02:56 PM, Uwe Sauter wrote: >> # mount -t nfs -o vers=4,rw,sync :$SHARE /mnt >> mount.nfs: mounting aurel:/proxmox-infra failed, reason given by server: No such file or directory > aurel:

Re: [PVE-User] Problems with backup process and NFS

2017-05-22 Thread Uwe Sauter
>>> perl -e 'use strict; use warnings; use PVE::ProcFSTools; use Data::Dumper; print Dumper(PVE::ProcFSTools::parse_proc_mounts());'
$VAR1 = [
  [
    ':/backup/proxmox-infra',
    '/mnt/pve/aurel',
    'nfs',

Re: [PVE-User] Problems with backup process and NFS

2017-05-22 Thread Uwe Sauter
>> the culprit is likely that your storage.cfg contains the IP, but your /proc/mounts contains the hostname (with a reverse lookup in between?). > I was following https://pve.proxmox.com/wiki/Storage:_NFS , quote: "To avoid DNS lookup delays, it is usually preferable to use an IP

Re: [PVE-User] Problems with backup process and NFS

2017-05-22 Thread Uwe Sauter
On 22.05.2017 at 15:40, Uwe Sauter wrote: >> I discovered a different issue with this definition: If I go to Datacenter -> node -> storage aurel -> content I only get "mount error: mount.nfs: /mnt/pve/aurel is busy or already mounted (500)"

Re: [PVE-User] Problems with backup process and NFS

2017-05-22 Thread Uwe Sauter
> I discovered a different issue with this definition: If I go to Datacenter -> node -> storage aurel -> content I only get "mount error: mount.nfs: /mnt/pve/aurel is busy or already mounted (500)". > The share is mounted again with the IP address though I didn't change the config after

Re: [PVE-User] sbin/unconfigured.sh

2017-05-18 Thread Uwe Sauter
l by running unconfigured.sh. > So basically, if I boot to the shell, how can I start the install from the contents of the CD/ISO. > On 18 May 2017 at 19:04, Uwe Sauter <uwe.sauter...@gmail.com> wrote:

Re: [PVE-User] sbin/unconfigured.sh

2017-05-18 Thread Uwe Sauter
Don't know what your situation is, but there is a wiki page [1] that describes the installation of Proxmox on top of an existing Debian. [1] https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie On 18.05.2017 at 19:55, Steve wrote: > In the version 3.2 ISO there was this script to start

Re: [PVE-User] Backup to Ceph / NFSv4

2017-05-18 Thread Uwe Sauter
but again, due to showmount not using TCP, PVE will not mount it automatically. Regards, Uwe On 18.05.2017 at 11:40, Uwe Sauter wrote: > Hi, > as my Proxmox hosts don't have enough local storage I wanted to do backups into the "network". One option

Re: [PVE-User] Problems with backup process and NFS

2017-05-23 Thread Uwe Sauter
Hi Fabian, >> I was following https://pve.proxmox.com/wiki/Storage:_NFS , quote: "To avoid >> DNS lookup delays, it is usually preferable to use an >> IP address instead of a DNS name". But yes, the DNS in our environment is >> configured to allow reverse lookups. > > which - AFAIK - is still

[PVE-User] WebUI - Ceph OSD page - remove vs. destroy

2017-05-16 Thread Uwe Sauter
Hi, I just noticed an (intentional?) inconsistency between the WebUI's Ceph OSD page vs. the tasks view on the bottom and the CLI: If you go to Datacenter -> node -> Ceph -> OSD and select one of the OSDs you can "remove" it with a button in the upper right corner. If you do so the task is

[PVE-User] PVE behind reverse proxy: different webroot possible?

2017-05-05 Thread Uwe Sauter
Hi, I've seen the wiki page [1] that explains how to operate a PVE host behind a reverse proxy. I'm currently in the situation that I have several services already behind a rev proxy that are accessible with different webroots, e.g. https://example.com/dashboard https://example.com/owncloud

[PVE-User] Snapshot size

2017-09-18 Thread Uwe Sauter
Hi, suppose I have several snapshots of a VM:
Snap1
└── Snap2
    └── Snap3
        └── Snap4
            └── Snap5
Is there a way to determine the size of each snapshot? Regards, Uwe

Re: [PVE-User] Snapshot size

2017-09-21 Thread Uwe Sauter
M           12636M On Thu, Sep 21, 2017 at 8:30 AM, Uwe Sauter <uwe.sauter...@gmail.com> wrote: > Hi, > thanks, but I forgot to mention that all my VMs have Ceph as backend and thus snapshots can'
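On an RBD backend the sizes can still be estimated from Ceph itself; a sketch (pool, image and snapshot names are placeholders):
----
rbd du vms/vm-106-disk-1          # per-snapshot usage for the whole image
rbd du vms/vm-106-disk-1@Snap3    # a single snapshot
----
This is fast only when the image has the fast-diff feature enabled; otherwise rbd scans all objects.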

[PVE-User] Disk recognition order

2017-08-29 Thread Uwe Sauter
Hi, I'm currently facing the following problem: VM is defined with several disks:
scsi0 -> ceph:vm-201-disk1,discard=on, size=16G
scsi1 -> ceph:vm-201-disk2,discard=on, size=16G
scsi2 -> ceph:vm-201-disk3,discard=on, size=4G
scsi3 -> ceph:vm-201-disk4,discard=on, size=4G
scsi4 ->

Re: [PVE-User] Disk recognition order

2017-08-29 Thread Uwe Sauter
) to the OS. Having a look at the dmesg output, it seems to be a timing issue: the highest LUN is recognized first. On 29.08.2017 at 13:30, Lindsay Mathieson wrote: > On 29/08/2017 9:17 PM, Uwe Sauter wrote: >> Is there any way to force scsi1 to /dev/sdb, scsi2 to /dev/sdc, etc. so that
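Inside the guest, the usual workaround is to stop relying on the sd* probe order altogether; a sketch (the UUID is a placeholder):
----
ls -l /dev/disk/by-id/    # stable names for each virtual disk
# /etc/fstab: mount by filesystem UUID instead of /dev/sdX
UUID=00000000-0000-0000-0000-000000000000  /data  ext4  defaults  0 2
----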

[PVE-User] Backup process starts VM??

2017-11-17 Thread Uwe Sauter
Hi all, I'm a bit shocked. I wanted to create a "safe" backup where the VM is shut down and thus all filesystems are in a consistent state. For that I shut down my VM and then started a backup (backup mode=stop, compression=lzo) and what must I see: INFO: starting new backup job: vzdump 106

Re: [PVE-User] Backup process starts VM??

2017-11-19 Thread Uwe Sauter
Thanks for the clarification! On 19.11.2017 at 09:11, Dietmar Maurer wrote: >>> Could someone with insight into the backup process explain why kvm is started? >> It uses the qemu copy-on-write feature to make sure the state is consistent. You can immediately work with that VM, while qemu

[PVE-User] Wiki regarding Ceph OSD tunables still correct?

2017-11-03 Thread Uwe Sauter
Hi, is it still correct to set tunables to "hammer" even with Proxmox 5? This is mentioned in the wiki [1]. Regards, Uwe [1] https://pve.proxmox.com/wiki/Ceph_Server#Set_the_Ceph_OSD_tunables

[PVE-User] PVE 5.1 pveperf not working correctly

2017-11-04 Thread Uwe Sauter
Hi, running a cluster with PVE 5.1 and Ceph. pveperf as described in [1] doesn't work anymore. Even as root I get:
root@pxmx-02:~# pveperf help
CPU BOGOMIPS:      89368.48
REGEX/SECOND:      1505926
df: help: No such file or directory
DNS EXT:           13.68 ms
DNS INT:           19.98 ms

Re: [PVE-User] PVE 5.1 pveperf not working correctly

2017-11-04 Thread Uwe Sauter
True, my bad. But every other PVE-related command I have used so far had a "help" subcommand, so I didn't look into the man page. Please take this then as a bug report for the subcommand (or a "-h" help option) and as a request to update the wiki article to include the info that a PATH argument can
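For the record, the working invocation; the optional path selects the filesystem that the HD SIZE / BUFFERED READS / FSYNCS tests run on:
----
pveperf               # benchmarks /
pveperf /var/lib/vz   # benchmarks the filesystem holding /var/lib/vz
----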

[PVE-User] Upgrade 4.4 to 5.1

2017-10-25 Thread Uwe Sauter
Hi, now that 5.1 is released, will there be documentation on how to upgrade from 4.4? Is the wiki page [1] valid for 5.1? Did someone already try the upgrade? Any experience is appreciated. Regards, Uwe [1] https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0

[PVE-User] I/O performance regression with kernel 4.15.x, ceph 12.2.x

2018-05-09 Thread Uwe Sauter
Hi, since kernel 4.15.x was released in pve-nosubscription I have I/O performance regressions that lead to 100% iowait in VMs, dropped (audit) log records and instability in general. All VMs that present this behavior run up-to-date CentOS 7 on Ceph-backed storage with kvm64 as CPU. This

[PVE-User] Meltdown/Spectre mitigation options / Intel microcode

2018-05-08 Thread Uwe Sauter
Hi all, I recently discovered that one of the updates since turn of the year introduced options to let the VM know about Meltdown/Spectre mitigation on the host (VM configuration -> processors -> advanced -> PCID & SPEC-CTRL). I'm not sure if I understand the documentation correctly so please

Re: [PVE-User] Meltdown/Spectre mitigation options / Intel microcode

2018-05-08 Thread Uwe Sauter
as I can tell. > > On Tue, May 08, 2018 at 03:31:52PM +0200, Uwe Sauter wrote: >> Hi all, >> >> I recently discovered that one of the updates since turn of the year >> introduced options to let the VM know about Meltdown/Spectre >> mitigation on the host (V

Re: [PVE-User] I/O performance regression with kernel 4.15.x, ceph 12.2.x

2018-05-16 Thread Uwe Sauter
g on 4.13.16 then no blocking OSDs happen (as far as I have seen until now). Has anyone repeatedly seen OSDs with blocked requests when running 4.15.17, or is it just me? Regards, Uwe On 09.05.2018 at 11:51, Uwe Sauter wrote: > Hi, > since kernel 4.15.x was released in pve-nos

[PVE-User] Hanging storage tasks in all RH based VMs after update

2018-05-02 Thread Uwe Sauter
Hi all, I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating VMs between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.) Since then I experience hanging storage tasks inside the VMs (e.g. jbd2 on VMs

Re: [PVE-User] Hanging storage tasks in all RH based VMs after update

2018-05-02 Thread Uwe Sauter
Hi Lindsay, I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating VMs between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.) What's your ceph status? It's probably doing a massive backfill after the

Re: [PVE-User] Hanging storage tasks in all RH based VMs after update

2018-05-02 Thread Uwe Sauter
Mathieson: On 3/05/2018 6:27 AM, Uwe Sauter wrote: I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating VMs between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.) Whats your ceph status? it probably doing

Re: [PVE-User] Hanging storage tasks in all RH based VMs after update

2018-05-03 Thread Uwe Sauter
Looks like this was caused by pve-kernel-4.15.15-1-pve. After rebooting into pve-kernel-4.13.16-2-pve, performance is back to normal. Hopefully the next kernel update will address this. Regards, Uwe On 02.05.2018 at 22:27, Uwe Sauter wrote: > Hi all, > I updated m

[PVE-User] Mellanox OFED on Proxmox Host

2017-10-20 Thread Uwe Sauter
Hi, I'm trying to use the virtualization support that Mellanox ConnectX-3 cards provide. In [1] you can find a document by Mellanox that describes the necessary steps for KVM. Currently I'm trying to install Mellanox OFED but the installation fails because there is no package

Re: [PVE-User] Proxmox disable TLS 1

2018-07-26 Thread Uwe Sauter
On 26.07.2018 at 11:22, Thomas Lamprecht wrote: > Hi, > On 07/26/2018 at 11:05 AM, Brent Clark wrote: >> Good day Guys >> I did an sslscan on my proxmox host, and I got the following: >> snippet: >> Preferred TLSv1.0 256 bits ECDHE-RSA-AES256-SHA Curve P-256 DHE 256

Re: [PVE-User] Proxmox disable TLS 1

2018-07-26 Thread Uwe Sauter
proxy_pass https://localhost:8006;
    proxy_buffering off;
    client_max_body_size 0;
    proxy_connect_timeout 3600s;
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
    send_timeout 3600s;
    }
}
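Pieced together, a sketch of such a proxy block (server name and certificate paths are placeholders); the Upgrade/Connection headers are the part the NoVNC websocket depends on:
----
server {
    listen 443 ssl;
    server_name pve.example.com;
    ssl_certificate     /etc/ssl/pve.pem;
    ssl_certificate_key /etc/ssl/pve.key;

    location / {
        proxy_pass https://127.0.0.1:8006;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # websocket upgrade for NoVNC
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 3600s;
    }
}
----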

Re: [PVE-User] Proxmox disable TLS 1

2018-07-26 Thread Uwe Sauter
Would you mind sharing the relevant parts of your nginx config? Does forwarding NoVNC traffic work? On 26.07.2018 at 13:22, Ian Coetzee wrote: > Hi All, > I know this has been answered. > What I did was to drop a reverse proxy (nginx) in front of pveproxy listening on port 443, then

Re: [PVE-User] PVE kernel (sources, build process, etc.)

2018-08-22 Thread Uwe Sauter
>>> * pve-kernel 4.13 is based on http://kernel.ubuntu.com/git/ubuntu/ubuntu-artful.git/ ? >> Yes. (Note that this may not get many updates anymore) >>> * pve-kernel 4.15 is based on http://kernel.ubuntu.com/git/ubuntu/ubuntu-bionic.git/ ? >> Yes. We're

Re: [PVE-User] PVE kernel (sources, build process, etc.)

2018-08-22 Thread Uwe Sauter
Hi Thomas, On 22.08.18 at 09:55, Thomas Lamprecht wrote: > Hi Uwe, > On 8/22/18 9:48 AM, Uwe Sauter wrote: >> Hi all, >> some quick questions: >> * As far as I can tell the PVE kernel is a modified version of Ubuntu kernels, correct

[PVE-User] PVE kernel (sources, build process, etc.)

2018-08-22 Thread Uwe Sauter
Hi all, some quick questions: * As far as I can tell the PVE kernel is a modified version of Ubuntu kernels, correct? Modifications can be viewed in the pve-kernel.git repository (https://git.proxmox.com/?p=pve-kernel.git;a=tree). * pve-kernel 4.13 is based on

Re: [PVE-User] PVE kernel (sources, build process, etc.)

2018-08-22 Thread Uwe Sauter
ming knowledge) Marcus Haarmann ---------- From: "Uwe Sauter" To: "pve-user" Sent: Wednesday, 22 August 2018 09:4

Re: [PVE-User] PVE kernel (sources, build process, etc.)

2018-08-22 Thread Uwe Sauter
One thing that speaks against this being PTI is that both types of nodes have secondary OSDs causing slow requests. Though it is still an option to try before giving up completely. On 22.08.18 at 11:45, Uwe Sauter wrote: > Hi Marcus, > no, I haven't disabled Spectre/Meltdown mitigat

Re: [PVE-User] PVE kernel (sources, build process, etc.)

2018-08-22 Thread Uwe Sauter
encountered stuck I/O on rbd devices. > And the kernel says it is losing the mon connection and hunting for a new mon all the time (when a backup takes place and heavy I/O is done). > Marcus Haarmann

Re: [PVE-User] Confusing about Bond 802.3ad

2018-08-24 Thread Uwe Sauter
If using standard 802.3ad (LACP) you will always get only the performance of a single link between one host and another. Using "bond-xmit-hash-policy layer3+4" might get you better performance but is not standard LACP. On 24.08.18 at 12:01, Gilberto Nunes wrote: > So what bond mode I
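A sketch of that bond stanza in /etc/network/interfaces (slave names are placeholders):
----
auto bond0
iface bond0 inet manual
    bond-slaves eth1 eth2
    bond-mode 802.3ad
    bond-miimon 100
    # hash on IP+port so several flows between the same two hosts
    # can land on different member links
    bond-xmit-hash-policy layer3+4
----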

Re: [PVE-User] PVE kernel (sources, build process, etc.)

2018-09-10 Thread Uwe Sauter
------- From: "uwe sauter de" To: "Thomas Lamprecht", "pve-user" Sent: Wednesday, 22 August 2018 10:50:19 Subject: Re: [PVE-User] PVE kernel

Re: [PVE-User] pve-kernel-4.13.13-5-pve hang with high IO

2018-01-24 Thread Uwe Sauter
As long as your microcode is older than June 2017 there is no way that it contains mitigations for Meltdown and Spectre, as Intel was only made aware of the flaws back in June 2017. The same goes for the BIOS, as the vendors need the microcode from Intel to include it in their updates. Regarding the

[PVE-User] WARNING: zfs 0.7.7 has regression

2018-04-10 Thread Uwe Sauter
Hi all, I discourage you from updating ZFS to version 0.7.7 as it contains a regression. Version 0.7.8 was released today that reverts the commit that introduced the regression. For Infos check: https://github.com/zfsonlinux/zfs/issues/7401 Regards, Uwe

[PVE-User] Firewall settings for migration type insecure

2018-03-23 Thread Uwe Sauter
Hi there, I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but migrations fail with this setting.
# log of failed insecure migration #
2018-03-23 14:58:44 starting migration of VM 101 to node 'px-bravo-cluster' (169.254.42.49)
2018-03-23 14:58:44 copying disk
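For context, the setting under test; a sketch with an illustrative migration network:
----
# /etc/pve/datacenter.cfg
migration: type=insecure,network=169.254.42.0/24
----
With type=insecure the migration streams run as plain TCP on the 60000-60050 port range, which is why the node firewalls have to permit it.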

Re: [PVE-User] Firewall settings for migration type insecure

2018-03-23 Thread Uwe Sauter
Thanks, I'll try again. On 23.03.2018 at 15:15, Thomas Lamprecht wrote: > Hi Uwe! > On 3/23/18 3:02 PM, Uwe Sauter wrote: >> Hi there, >> I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but migrations fail with this

Re: [PVE-User] Firewall settings for migration type insecure

2018-03-23 Thread Uwe Sauter
Ah, syntax. Thanks again. Have a nice weekend. On 23.03.2018 at 15:35, Thomas Lamprecht wrote: > Uwe, > On 3/23/18 3:31 PM, Uwe Sauter wrote: >> a quick follow-up: is it possible to create PVE firewall rules for port ranges? It seems that only a single port is allo

Re: [PVE-User] Firewall settings for migration type insecure

2018-03-23 Thread Uwe Sauter
lid format - invalid port '6-60050' Best, Uwe On 23.03.2018 at 15:15, Thomas Lamprecht wrote: > Hi Uwe! > On 3/23/18 3:02 PM, Uwe Sauter wrote: >> Hi there, >> I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
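The "Ah, syntax" resolution above matches how PVE's firewall expects ranges: iptables-style with a colon, not a dash. A sketch of such a rule (the source network is illustrative):
----
# /etc/pve/firewall/cluster.fw
[RULES]
IN ACCEPT -source 169.254.42.0/24 -p tcp -dport 60000:60050
----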

[PVE-User] Request for backport of Ceph bugfix from 12.2.9

2018-11-07 Thread Uwe Sauter
Hi, I'm trying to manually migrate VM images with snapshots from pool "vms" to pool "vdisks" but it fails: # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 2 - vdisks/vm-102-disk-2 rbd: import header failed. rbd: import failed: (22) Invalid argument Exporting

Re: [PVE-User] LVM issue

2018-11-08 Thread Uwe Sauter
Hi, the first problem is that you seem to be using some client that replaces verbose text with links to Facebook. Could you please resend your mail as a plain text message (no HTML)? This should also take care of the formatting (currently there is no monospace font, which makes it much harder to find the

[PVE-User] Quick question regarding node removal

2018-11-06 Thread Uwe Sauter
Hi, in the documentation to pvecm [1] it says: At this point you must power off hp4 and make sure that it will not power on again (in the network) as it is. Important: As said above, it is critical to power off the node before removal, and make sure that it will never power on again (in the

Re: [PVE-User] Request for backport of Ceph bugfix from 12.2.9

2018-11-08 Thread Uwe Sauter
Hi all, thanks for looking into this. With help from the ceph-users list I was able to migrate my images. So no need anymore. Best, Uwe On 08.11.18 at 16:38, Thomas Lamprecht wrote: > On 11/8/18 1:43 PM, Alwin Antreich wrote: >> On Wed, Nov 07, 2018 at 09:01:09PM +0100, U

Re: [PVE-User] Local interface on Promox server

2018-11-25 Thread Uwe Sauter
You could use qm terminal to connect to the serial console. Ctrl + o will quit the session. You need to configure your VMs to provide a serial console, e.g. by adding "console=tty0 console=ttyS0,115200n8" to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and running "grub-mkconfig -o
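Put together, a sketch of both sides (the VM ID and the grub.cfg path are examples):
----
# inside the guest, /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet console=tty0 console=ttyS0,115200n8"
grub-mkconfig -o /boot/grub/grub.cfg

# on the PVE host: give the VM a serial device, then attach
qm set 100 -serial0 socket
qm terminal 100
----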

Re: [PVE-User] Cluster network via directly connected interfaces?

2018-11-22 Thread Uwe Sauter
Frank Thommen: Good point. Thanks a lot frank On 11/22/2018 07:51 PM, Uwe Sauter wrote: FYI: I had such a thing working. What you need to keep in mind is that you should configure both interfaces per host on the same (software) bridge and keep STP on… that way when you lose the link from

Re: [PVE-User] Cluster network via directly connected interfaces?

2018-11-22 Thread Uwe Sauter
FYI: I had such a thing working. What you need to keep in mind is that you should configure both interfaces per host on the same (software) bridge and keep STP on… that way when you lose the link from node A to node B the traffic will be going through node C.
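A sketch of one node's half of that triangle (names and addresses are placeholders); both direct links sit on one bridge so STP can route around a dead cable:
----
auto vmbr1
iface vmbr1 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bridge_ports eth2 eth3   # eth2 -> node B, eth3 -> node C
    bridge_stp on
    bridge_fd 2
----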

Re: [PVE-User] PVE Firewall Port forwarding...

2019-04-06 Thread Uwe Sauter
And how would you handle the situation where you want to use dport 10200 on several VMs on the same host? I don't think that this will work reliably in a cluster, where VMs migrate between hosts. Am 06.04.19 um 16:25 schrieb Gilberto Nunes: Hi there... Is there any way to use port forward

Re: [PVE-User] Proxmox Ceph Cluster and osd_target_memory

2019-02-19 Thread Uwe Sauter
There is no in-place way to convert between the two, but you can set one drive at a time to "out" to migrate data away, then to "off" and destroy it. Then remove the OSD and recreate it with filestore. Let it sync and, once finished, do the next drive. On 19.02.19 at 12:16, Gilberto Nunes wrote: > Hi > I have 15
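A sketch of that per-disk cycle on the CLI (the OSD id is a placeholder; "osd purge" exists since Luminous):
----
ceph osd out 12                           # let data drain off the OSD
ceph -s                                   # wait until all PGs are active+clean
systemctl stop ceph-osd@12                # take it offline
ceph osd purge 12 --yes-i-really-mean-it  # drop it from CRUSH, auth and the OSD map
# recreate the OSD on the emptied disk (GUI or pveceph createosd),
# wait for the resync, then move on to the next drive
----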

Re: [PVE-User] Intel Corporation Gigabit ET2 Quad Port Server Adapter

2019-04-11 Thread Uwe Sauter
In the dmesg output, are there lines like "e1000e :00:19.0 enp0s25: renamed from eth0"? The question I see is: are all of your interfaces detected but udev is doing something wrong, or does the kernel not detect all interfaces (besides, it seems to see all PCIe devices)? You could try to create
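One way to pin names to hardware, as suggested here, is a systemd .link file; a sketch (the MAC address and chosen name are placeholders):
----
# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
----
Since udev also runs in the initramfs, an update-initramfs -u plus reboot may be needed for the new name to stick.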

Re: [PVE-User] Proxmox VE 5.4 released!

2019-04-11 Thread Uwe Sauter
On 11.04.19 at 16:07, Thomas Lamprecht wrote: On 4/11/19 2:47 PM, Uwe Sauter wrote: Thanks for all your effort. Two questions though: From the release notes: HA improvements and added flexibility It is now possible to set a datacenter wide HA policy which can change the way guests

Re: [PVE-User] Proxmox VE 5.4 released!

2019-04-11 Thread Uwe Sauter
Thanks for all your effort. Two questions though: From the release notes: HA improvements and added flexibility It is now possible to set a datacenter wide HA policy which can change the way guests are treated upon a Node shutdown or reboot. The choices are: freeze: always freeze

Re: [PVE-User] Cluster mixed hardware behavior (CPUs)

2019-04-26 Thread Uwe Sauter
To be most flexible in a HA setup you would take the lowest common denominator on the CPU architecture / feature side. E.g. if you have 5 hosts with SandyBridge CPUs and 5 hosts with Skylake CPUs, you would limit the CPU type to SandyBridge. This enables you to migrate VMs back from a Skylake node to
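Concretely, a sketch (the VM ID is a placeholder):
----
qm set 101 -cpu SandyBridge
----
This writes "cpu: SandyBridge" into the VM config, so the guest never sees Skylake-only CPU flags and can be migrated in both directions.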

[PVE-User] Move VM's HDD incl. snapshots from one Ceph to another

2019-08-19 Thread Uwe Sauter
Hi all, is it possible to move a VM's disks from one Ceph cluster to another, including all snapshots that those disks have? The GUI doesn't let me do it but is there some commandline magic that will move the disks and all I have to do is edit the VM's config file? Background: I have two PVE
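One possible command-line approach, a sketch assuming one host can reach both clusters via separate config files and keyrings (paths are placeholders); --export-format 2 is what carries the snapshots along:
----
rbd -c /etc/ceph/source.conf export --export-format 2 vms/vm-102-disk-1 - \
  | rbd -c /etc/ceph/dest.conf import --export-format 2 - vdisks/vm-102-disk-1
----
Note the 2018 thread above, where this same pipeline tripped over an import bug fixed in Ceph 12.2.9; the VM config file then still has to be edited by hand.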

Re: [PVE-User] Move VM's HDD incl. snapshots from one Ceph to another

2019-08-19 Thread Uwe Sauter
unza wrote: > Hi Uwe, > On 19/8/19 at 10:14, Uwe Sauter wrote: >> is it possible to move a VM's disks from one Ceph cluster to another, including all snapshots that those disks have? The GUI >> doesn't let me do it but is there some com
