<prox...@elchaka.de> wrote:
IIRC this was necessary in PVE 4 (and 5?), but not in 6 anymore.
Hth
Mehmet
On 19 February 2020 12:55:33 CET, "Stefan M. Radman via pve-user"
<pve-user@pve.proxmox.com> wrote:
--- Begin Message ---
You're right. I was mistakenly checking a CentOS VM instead of the PVE host.
Sorry.
On Feb 21, 2020, at 15:15, Demetri A. Mkobaranov <damkobara...@gmail.com> wrote:
It seems like that path might work for RPM-based distros. I'm on Debian.
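(For the record, on Debian the rough counterpart would be the iptables-persistent
package; a sketch, assuming the package is installed, using its default paths:

# apt install iptables-persistent
# ip6tables-save > /etc/iptables/rules.v6
# netfilter-persistent save

The saved rules are reloaded at boot by netfilter-persistent.)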
--- Begin Message ---
Try /etc/sysconfig/ip6tables
Stefan
# fgrep -A1 /etc/sysconfig/ip6tables /etc/sysconfig/ip6tables-config
# Saves all firewall rules to /etc/sysconfig/ip6tables if firewall gets stopped
# (e.g. on system shutdown).
--
# Saves all firewall rules to /etc/sysconfig/ip6tables if
--- Begin Message ---
Thanks. That makes cluster hostfile maintenance much easier.
> On Feb 19, 2020, at 14:30, Mira Limbeck wrote:
>
> It is neither used nor part of newer installations (5.3+ I think?). You can
> remove it.
>
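For anyone cleaning up by hand, a one-liner along these lines should do it
(a sketch; check your hosts file first):

# sed -i 's/[[:space:]]*pvelocalhost//' /etc/hosts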
--- Begin Message ---
What is the pvelocalhost alias in /etc/hosts good for?
Is it still used in PVE6?
Is it mandatory?
If mandatory, can I use the loopback address as seen below?
root@proxmox:~# head -1 /etc/hosts
127.0.0.1 localhost.localdomain localhost pvelocalhost
Thanks
Stefan
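(IIRC the installer used to put the pvelocalhost alias on the node's real LAN
address rather than on the loopback line; a sketch, with a made-up hostname
and address:

192.0.2.10 proxmox.example.com proxmox pvelocalhost
127.0.0.1 localhost.localdomain localhost

But see the answer above: in PVE 6 the alias is not needed at all.)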
--- Begin Message ---
Just wanted to share an experience from a recent upgrade to 6.1 of a
node that uses iSCSI storage.
The upgrade ultimately failed because dpkg refused to upgrade the open-iscsi
package due to a failure in its postinst script.
See the
pdate.
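(For anyone hitting the same thing, the usual way out of a half-configured
dpkg state is something like the following; a sketch, not specific to this bug:

# dpkg --configure -a
# apt -f install
# apt install --reinstall open-iscsi

The first command retries the pending postinst scripts, the second lets apt
repair broken dependencies.)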
Humberto
________
From: "Stefan M. Radman via pve-user" <pve-user@pve.proxmox.com>
To: "nada" <n...@verdnatura.es>, "PVE User List" <pve-user@pve.proxmox.com>
Cc: "Stefan M. Radman" <s...@kmi.com>
(see above).
It seems to be a timing problem after all.
Stefan
> On Feb 3, 2020, at 11:20, Marco Gaiarin wrote:
>
> Hello! Stefan M. Radman via pve-user
> On that day it was said...
>
>> My own workaround for the issue at hand is to delay the start of lvm2-pvscan
>>
d
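(The quoted workaround is cut off here. One way such a delay can be wired in
is a systemd drop-in; a sketch, assuming lvm2-pvscan@.service is the unit
that fires too early:

# /etc/systemd/system/lvm2-pvscan@.service.d/wait-for-multipath.conf
[Unit]
After=multipathd.service
[Service]
ExecStartPre=/bin/sleep 10

followed by "systemctl daemon-reload".)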
Nada
On 2020-02-02 23:13, Stefan M. Radman wrote:
After upgrading a node that uses iSCSI multipath to 6.1, auto
activation of LVM volumes fails for me as well.
I am seeing exactly the same problem as originally reported by Nada.
Apparently, LVM scans the iSCSI adapters before multipath is fully
--- Begin Message ---
After upgrading a node that uses iSCSI multipath to 6.1, auto activation of LVM
volumes fails for me as well.
I am seeing exactly the same problem as originally reported by Nada.
Apparently, LVM scans the iSCSI adapters before multipath is fully loaded and
consequently
gateway 172.21.54.254
bridge_ports bond0
bridge_stp off
bridge_fd 0
mtu 1500
post-up ip link set dev bond0 mtu 9000 && ip link set dev vmbr0 mtu 1500
#PRIVATE - VLAN 682 - Native
On Jan 22, 2020, at 01:12, Stefan M. Radman <s...@kmi.com> wrote:
Hi Ronny
Thanks for the input
--- Begin Message ---
Hi Ronny
Thanks for the input.
setting mtu 1500 on vmbr0 may propagate to
member interfaces, in more recent kernels. I believe member ports need to
have the same mtu as the bridge
Hmm. That might be the point with the native bond0 interface.
Can you refer me to the place
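(Either way, whether bridge and member MTUs actually line up is easy to
check; a sketch, using the interface names from the config above:

# ip -d link show vmbr0 | grep -o 'mtu [0-9]*'
# ip -d link show bond0 | grep -o 'mtu [0-9]*')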
> bond-mode active-backup
> post-up ip link set bond2 mtu 9000
> ...
>
> As you can see, the MTU is set both at interface and at bond level.
> I don't know if this is a hard requirement but at least it's working.
>
> Flo
>
> On Tuesday, 21.01.2020 at 09:17 +0100, Ronny Aasen wrote:
--- Begin Message ---
Recently I upgraded the first node of a 5.4 cluster to 6.1.
Everything went smoothly (thanks for pve5to6!) until I restarted the first node
after the upgrade and started to get weird problems with the
LVM/multipath/iSCSI based storage (hung PVE and LVM processes etc).
After
only have one WWID. For two WWIDs you'll need two
multipath subsections.
https://help.ubuntu.com/lts/serverguide/multipath-dm-multipath-config-file.html#multipath-config-multipath
Stefan
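Something along these lines; a sketch, with placeholder WWIDs and aliases:

multipaths {
    multipath {
        wwid  36000000000000000000000000000000a
        alias mpath-lun0
    }
    multipath {
        wwid  36000000000000000000000000000000b
        alias mpath-lun1
    }
}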
On Jan 15, 2020, at 10:55, nada <n...@verdnatura.es> wrote:
On 2020-01-14 19:46, Stefan M. Radman wrote:
--- Begin Message ---
Hi Nada
What's the output of "systemctl --failed" and "systemctl status
lvm2-activation-net.service"?
Stefan
> On Jan 14, 2020, at 16:40, nada wrote:
>
> don't worry and be happy, Marco
> that rc.local saved the situation (a temporary solution ;-)
> and CTs which have
, n...@verdnatura.es wrote:
>
> On 2019-04-27 10:30, Stefan M. Radman via pve-user wrote:
--- Begin Message ---
Nada
You can't connect an NFS server to Proxmox via the iSCSI protocol.
The two protocols are fundamentally different from each other.
Stefan
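If the goal is to use the NFS export as Proxmox storage, it can be added
natively; a sketch, with placeholder storage name, server and export path:

# pvesm add nfs mynfs --server 192.0.2.20 --export /srv/share --content images,iso

or via Datacenter > Storage > Add > NFS in the GUI.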
> On Apr 27, 2019, at 11:08, n...@verdnatura.es wrote:
>
> hi Gerald
> just connect your NFS server to proxmox via iSCSI
> see
Thanks a lot for your suggestions!
Cheers
Eneko
On 5/12/18 at 13:52, Stefan M. Radman wrote:
Hi Eneko
Even if PVE does it by default, that does not mean it is the best thing to do.
The default configuration of PVE is a starting point.
https://pve.proxmox.com/wiki/Separate_Cluster_Network
See the document quoted above.
Don't use vmbr0 for cluster traffic.
Don't use any vmbr for cluster traffic.
Stefan
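In practice that means giving corosync its own plain interface; a sketch
along the lines of the wiki page linked above, with placeholder addresses:

auto eth2
iface eth2 inet static
    address 10.10.10.1
    netmask 255.255.255.0

and corosync's ring0_addr pointing at the 10.10.10.x addresses.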
On Dec 5, 2018, at 13:34, Eneko Lacunza <elacu...@binovo.es> wrote:
Hi Stefan,
On 4/12/18 at 23:50, Stefan M. Radman wrote:
Don't put your corosync traffic on bridges
I am running a 3 node PVE cluster connected to shared storage with LVM.
Two of the nodes are connected via 2x4GFC (direct-attach) and one via 2x1GbE
iSCSI (switched).
The only issue I experienced in the past was a failure to activate LVM volumes
after a reboot on the iSCSI node.
Hi Ronny
That's the first time I hear of a routing protocol in the corosync context.
Doesn't that add a whole lot of complexity to the setup?
Would it work with corosync multicast?
Stefan
> On Nov 23, 2018, at 12:00 PM, Ronny Aasen wrote:
>
> Personally, if I was to try and experiment with
You might want to have a look at
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
This is what (I think) Thomas Lamprecht is referring to and it should also be
usable for corosync.
The advantage of this configuration over the bridged solution used by Uwe
Sauter is the zero
The consoles of my Proxmox nodes keep showing the error
CIFS VFS: ioctl error in smb2_get_dfs_refer rc=-5
Apparently, the QNAP NAS serving shares to the cluster via CIFS, does not
support DFS.
This should not be flagged as an error and was recently fixed in kernel 4.17.
Hi MJ
Wouldn't it be useful to start a new thread then?
Stefan
On May 24, 2018, at 1:44 PM, mj wrote:
Having now read that page, I have a new question. :-)
5/2018 10:17, Stefan M. Radman wrote:
Hi Gregor
The update instructions are here:
https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_5.x_to_latest_5.2
The instructions mention the enterprise repository; will they also work for the
no-subscription one?
Regards
Simo
Hi Gregor
The update instructions are here:
https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_5.x_to_latest_5.2
Turning off the VMs is not necessary.
Just live-migrate the virtual machines away from the node you plan to upgrade
and move them back afterwards.
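In terms of commands that boils down to something like this; a sketch, with
placeholder VMID and node names:

# on the node to be upgraded:
qm migrate 100 node2 --online
apt update && apt dist-upgrade
# then, on node2, move the VM back:
qm migrate 100 node1 --online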