Re: [PVE-User] pvelocalhost

2020-02-22 Thread Stefan M. Radman via pve-user
prox...@elchaka.de wrote: IIRC this was necessary in PVE 4 (5?) but not in 6 anymore. HTH Mehmet. On 19 February 2020 12:55:33 CET, "Stefan M. Radman via pve-user" pve-user@pve.proxmox.com wrote:

Re: [PVE-User] IPv6 disabled - status update error: iptables_restore_cmdlist

2020-02-21 Thread Stefan M. Radman via pve-user
You're right. I was mistakenly checking a CentOS VM instead of the PVE host. Sorry. On Feb 21, 2020, at 15:15, Demetri A. Mkobaranov damkobara...@gmail.com wrote: It seems like that path might work for RPM-based distros. I'm on Debian.
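
For reference, on a Debian-based PVE host there is no /etc/sysconfig; a generic way to check whether IPv6 is disabled and to dump the live IPv6 ruleset (a sketch for orientation, not the fix discussed in this thread) is:

    cat /proc/sys/net/ipv6/conf/all/disable_ipv6   # 1 means IPv6 is disabled via sysctl
    ip6tables-save                                 # print the currently loaded IPv6 rules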

Re: [PVE-User] IPv6 disabled - status update error: iptables_restore_cmdlist

2020-02-20 Thread Stefan M. Radman via pve-user
Try /etc/sysconfig/ip6tables. Stefan # fgrep -A1 /etc/sysconfig/ip6tables /etc/sysconfig/ip6tables-config # Saves all firewall rules to /etc/sysconfig/ip6tables if firewall gets stopped # (e.g. on system shutdown). -- # Saves all firewall rules to /etc/sysconfig/ip6tables if

Re: [PVE-User] pvelocalhost

2020-02-19 Thread Stefan M. Radman via pve-user
Thanks. That makes cluster hostfile maintenance much easier. > On Feb 19, 2020, at 14:30, Mira Limbeck wrote: > > It is neither used nor part of newer installations (5.3+ I think?). You can > remove it.

[PVE-User] pvelocalhost

2020-02-19 Thread Stefan M. Radman via pve-user
What is the pvelocalhost alias in /etc/hosts good for? Is it still used in PVE 6? Is it mandatory? If mandatory, can I use the loopback address as seen below? root@proxmox:~# head -1 /etc/hosts 127.0.0.1 localhost.localdomain localhost pvelocalhost Thanks Stefan
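
Given the answer above (the alias is unused in current releases and can be removed), the first line of /etc/hosts may simply read as below; the node's own hostname must still resolve to its LAN address on a separate line (hostname and address here are examples):

    127.0.0.1 localhost.localdomain localhost
    # e.g. 192.168.1.10 proxmox.example.com proxmox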

[PVE-User] PVE 6.1 upgrade and iSCSI

2020-02-04 Thread Stefan M. Radman via pve-user
Just wanted to share an experience from a recent upgrade to 6.1 of a node that uses iSCSI storage. The upgrade ultimately failed because dpkg refused to upgrade the open-iscsi package due to a failure in its postinst script. See the
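
The preview is cut off before the resolution; a typical way to resume an upgrade that dpkg aborted in a postinst script (an assumption about the general procedure, not necessarily what was done here) is:

    systemctl status open-iscsi iscsid   # check why the service side failed
    dpkg --configure -a                  # re-run the interrupted configuration step
    apt -f install                       # let apt finish any half-installed packages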

Re: [PVE-User] LVM autoactivation failed with multipath over iSCSI

2020-02-03 Thread Stefan M. Radman via pve-user
…update. Humberto ________ From: "Stefan M. Radman via pve-user" pve-user@pve.proxmox.com To: "nada" n...@verdnatura.es, "PVE User List" pve-user@pve.proxmox.com Cc: "Stefan M. Radman" s...@kmi.com

Re: [PVE-User] LVM autoactivation failed with multipath over iSCSI

2020-02-03 Thread Stefan M. Radman via pve-user
(see above). It seems to be a timing problem after all. Stefan > On Feb 3, 2020, at 11:20, Marco Gaiarin wrote: > > Hello! Stefan M. Radman via pve-user wrote... > >> My own workaround for the issue at hand is to delay the start of lvm2-pvscan >>
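
The workaround quoted above (delaying lvm2-pvscan until multipath is ready) can be expressed as a systemd drop-in along these lines; the ordering shown is a sketch, not the poster's verbatim configuration:

    # systemctl edit lvm2-pvscan@.service
    [Unit]
    After=multipathd.service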

Re: [PVE-User] LVM autoactivation failed with multipath over iSCSI

2020-02-03 Thread Stefan M. Radman via pve-user
…Nada. On 2020-02-02 23:13, Stefan M. Radman wrote: After upgrading a node that uses iSCSI multipath to 6.1, autoactivation of LVM volumes fails for me as well. I am seeing exactly the same problem as originally reported by Nada. Apparently, LVM scans the iSCSI adapters before multipath is fully

Re: [PVE-User] LVM autoactivation failed with multipath over iSCSI

2020-02-02 Thread Stefan M. Radman via pve-user
After upgrading a node that uses iSCSI multipath to 6.1, autoactivation of LVM volumes fails for me as well. I am seeing exactly the same problem as originally reported by Nada. Apparently, LVM scans the iSCSI adapters before multipath is fully loaded and consequently

Re: [PVE-User] PVE 6.1 incorrect MTU

2020-02-02 Thread Stefan M. Radman via pve-user
…gateway 172.21.54.254 bridge_ports bond0 bridge_stp off bridge_fd 0 mtu 1500 post-up ip link set dev bond0 mtu 9000 && ip link set dev vmbr0 mtu 1500 #PRIVATE - VLAN 682 - Native On Jan 22, 2020, at 01:12, Stefan M. Radman s...@kmi.com wrote: Hi Ronny Thanks for the input
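
Reassembled as an /etc/network/interfaces stanza, the configuration quoted above appears to be roughly the following (the address line is illustrative; only the quoted lines are certain):

    auto vmbr0
    iface vmbr0 inet static
        address 172.21.54.1/24
        gateway 172.21.54.254
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        mtu 1500
        post-up ip link set dev bond0 mtu 9000 && ip link set dev vmbr0 mtu 1500
        #PRIVATE - VLAN 682 - Native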

Re: [PVE-User] PVE 6.1 incorrect MTU

2020-01-21 Thread Stefan M. Radman via pve-user
Hi Ronny. Thanks for the input. "Setting mtu 1500 on vmbr0 may propagate to member interfaces, in more recent kernels. I believe member ports need to have the same mtu as the bridge." Hmm. That might be the point with the native bond0 interface. Can you refer me to the place

Re: [PVE-User] PVE 6.1 incorrect MTU

2020-01-21 Thread Stefan M. Radman via pve-user
…active-backup > post-up ip link set bond2 mtu 9000 > ... > > As you can see, the MTU is set both at interface and at bond level. > I don't know if this is a hard requirement but at least it's working. > > Flo > > On Tuesday, 21.01.2020, at 09:17 +0100, Ronny Aa

[PVE-User] PVE 6.1 incorrect MTU

2020-01-20 Thread Stefan M. Radman via pve-user
Recently I upgraded the first node of a 5.4 cluster to 6.1. Everything went smoothly (thanks for pve5to6!) until I restarted the first node after the upgrade and started to get weird problems with the LVM/multipath/iSCSI-based storage (hung PVE and LVM processes etc.). After

Re: [PVE-User] LVM autoactivation failed with multipath over iSCSI

2020-01-15 Thread Stefan M. Radman via pve-user
only have one WWID. For two WWIDs you'll need two multipath subsections. https://help.ubuntu.com/lts/serverguide/multipath-dm-multipath-config-file.html#multipath-config-multipath Stefan On Jan 15, 2020, at 10:55, nada n...@verdnatura.es wrote: On 2020-01-14 19:46, Stefan M. Rad
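
A minimal sketch of two such multipath subsections in /etc/multipath.conf (WWIDs and aliases below are placeholders):

    multipaths {
        multipath {
            wwid  3600a098038303053453f463045727456
            alias lun0
        }
        multipath {
            wwid  3600a098038303053453f463045727457
            alias lun1
        }
    }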

Re: [PVE-User] LVM autoactivation failed with multipath over iSCSI

2020-01-14 Thread Stefan M. Radman via pve-user
Hi Nada. What's the output of "systemctl --failed" and "systemctl status lvm2-activation-net.service"? Stefan > On Jan 14, 2020, at 16:40, nada wrote: > > don't worry and be happy Marco > that rc.local saved the situation (temporary solution ;-) > and CTs which have

Re: [PVE-User] NFSv4.1 multipath available?

2019-04-27 Thread Stefan M. Radman via pve-user
…n...@verdnatura.es wrote: > > On 2019-04-27 10:30, Stefan M. Radman via pve-user wrote:

Re: [PVE-User] NFSv4.1 multipath available?

2019-04-27 Thread Stefan M. Radman via pve-user
Nada, you can't connect an NFS server to Proxmox via the iSCSI protocol. The two protocols are fundamentally different from each other. Stefan > On Apr 27, 2019, at 11:08, n...@verdnatura.es wrote: > > hi Gerald > just connect your NFS server to proxmox via iSCSI > see

Re: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card?

2018-12-17 Thread Stefan M. Radman
…a lot for your suggestions! Cheers Eneko On 5/12/18 at 13:52, Stefan M. Radman wrote: Hi Eneko. Even if PVE does it by default, it does not mean that it is the best thing to do. The default configuration of PVE is a starting point. https://pve.proxmox.com/wiki/Separate_Cluster_Network Se

Re: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card?

2018-12-05 Thread Stefan M. Radman
…document quoted above. Don't use vmbr0 for cluster traffic. Don't use any vmbr for cluster traffic. Stefan On Dec 5, 2018, at 13:34, Eneko Lacunza elacu...@binovo.es wrote: Hi Stefan, On 4/12/18 at 23:50, Stefan M. Radman wrote: Don't put your corosync traffic on bridg
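
A dedicated, non-bridged corosync interface, as recommended above, might look like this in /etc/network/interfaces (NIC name and subnet are assumptions):

    auto eno3
    iface eno3 inet static
        address 10.10.10.1/24
        # cluster/corosync traffic only - no vmbr bridge on top of this NIC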

Re: [PVE-User] PVE Cluster and iSCSI

2018-11-28 Thread Stefan M. Radman
I am running a 3 node PVE cluster connected to shared storage with LVM. Two of the nodes are connected via 2x4GFC (direct-attach) and one via 2x1GbE iSCSI (switched). The only issue I experienced in the past was a failure to activate LVM volumes after a reboot on the iSCSI node.

Re: [PVE-User] Cluster network via directly connected interfaces?

2018-11-23 Thread Stefan M. Radman
Hi Ronny. That's the first time I've heard of a routing protocol in the corosync context. Doesn't that add a whole lot of complexity to the setup? Would it work with corosync multicast? Stefan > On Nov 23, 2018, at 12:00 PM, Ronny Aasen wrote: > > Personally, if I was to try and experiment with

Re: [PVE-User] Cluster network via directly connected interfaces?

2018-11-23 Thread Stefan M. Radman
You might want to have a look at https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server This is what (I think) Thomas Lamprecht is referring to and it should also be usable for corosync. The advantage of this configuration over the bridged solution used by Uwe Sauter is the zero
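
For illustration, one direct link of such a mesh could be addressed point-to-point like this on each side (names and subnets are assumed; the wiki article above describes the complete routed setup):

    # node A, cable to node B
    auto ens19
    iface ens19 inet static
        address 10.15.15.1/30
    # node B uses 10.15.15.2/30 on its end of the same cable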

[PVE-User] CIFS VFS: ioctl error in smb2_get_dfs_refer rc=-5

2018-06-14 Thread Stefan M. Radman
The consoles of my Proxmox nodes keep showing the error "CIFS VFS: ioctl error in smb2_get_dfs_refer rc=-5". Apparently, the QNAP NAS serving shares to the cluster via CIFS does not support DFS. This should not be flagged as an error and was recently fixed in kernel 4.17.
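
Until a fixed kernel is running, mounting the share with DFS referrals disabled should silence the message; a hedged example (server, share and credentials file are placeholders):

    mount -t cifs //qnap/backup /mnt/backup -o credentials=/etc/pve-cifs.cred,vers=3.0,nodfs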

Re: [PVE-User] Update a cluster from 5.1-46 to 5.2

2018-05-24 Thread Stefan M. Radman
Hi MJ. Wouldn't it be useful to start a new thread then? Stefan On May 24, 2018, at 1:44 PM, mj wrote: Having now read that page, I have a new question. :-)

Re: [PVE-User] Update a cluster from 5.1-46 to 5.2

2018-05-24 Thread Stefan M. Radman
…5/2018 10:17, Stefan M. Radman wrote: Hi Gregor. The update instructions are here: https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_5.x_to_latest_5.2 The instructions mention the enterprise repository; will it also work for the no-subscription one? Regards Simo

Re: [PVE-User] Update a cluster from 5.1-46 to 5.2

2018-05-24 Thread Stefan M. Radman
Hi Gregor The update instructions are here: https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_5.x_to_latest_5.2 Turning off the VMs is not necessary. Just live-migrate the virtual machines away from the node you plan to upgrade and move them back afterwards.
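
As to the repository question above: the same steps should apply with the no-subscription repository; roughly (the sources line shown is the standard one for PVE 5.x on Debian stretch):

    # /etc/apt/sources.list.d/pve-no-subscription.list
    deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

    apt update
    apt dist-upgrade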