Hi,
I see the same behaviour with an EMC VNXe3200 (two priorities).
I assume it is the right thing to do; the host really only has 2x1Gbit
channels to storage... :)
On 7/1/19 at 10:37, Marco Gaiarin wrote:
Hello, Sten Aus!
On that day you wrote...
> As this is my third storage for not critical data, I don't worry much, but I
> haven't checked if two paths (prio=50) get more load than other two (prio=10).
Apart from the prio, the difference from the jessie multipath is that
in jessie all path
Hi
I've encountered the same "issue" (or "feature", I'm not sure which).
I've tried to edit/tweak the multipath config, but "prio const" should
mean that every path gets the same priority.
As this is my third storage, for non-critical data, I don't worry much,
but I haven't checked whether the two paths (prio=50) get more
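For reference, pinning a constant priority can be sketched as a multipath.conf fragment (the values shown are assumptions for illustration, not the poster's actual config; adapt the sections to your array):

```
# /etc/multipath.conf (fragment) -- give all paths equal priority
defaults {
    prio                  "const"
    path_grouping_policy  multibus
}
```

With `prio const` every path reports the same priority, so all paths should land in a single load-balanced path group instead of two groups with prio=50 and prio=10.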
I've just upgraded my cluster from 4.4 to the latest 5.
Before that, I also did a firmware upgrade of the SAN (HP MSA 1040),
but that seems unrelated.
On the old PVE 4.4/jessie I got:
dixie:~# multipath -ll
mpath0 (3600c0ff00026ed11a7cb56570100) dm-1 HP,MSA 1040 SAN
size=1.4T features='1
Hi,
I have a problem with one of my clusters.
The problem is the following:
even though I migrated all HA-managed VMs off the node, for some
reason one of the nodes got fenced upon applying updates.
After the node came back up I applied all updates and rebooted again.
Now I have the problem that
Hi...
This is not a bug... Consider this more a suggestion than some kind of
error report...
But every time I do an online migration, I get this status message:
Oct 14 09:10:54 migration status: active (transferred 16504607864,
remaining 134885376), total 10754924544)
Oct 14 09:10:54 migration xbzrle
OK, this seems like a bug. It's rather cosmetic, as it doesn't have any
effect on the HA stack.
I will write a patch.
On 09/25/2015 05:11 PM, Gilberto Nunes wrote:
Nope... The server is down, and I already removed it from corosync.conf
and with the command pvecm delnode proxmox03...
2015-09-25 11:38
Hi, can you look into the syslog of your current master and see if there
are messages like:
detected unknown node state
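The check can be sketched like this (the log line below is a fabricated sample for illustration; on the real master you would grep /var/log/syslog directly):

```shell
# Illustrative only: write a sample line the way pve-ha-crm would log it,
# then count matches; on the master, grep /var/log/syslog instead.
log=/tmp/syslog.sample
printf '%s\n' "pve-ha-crm[1360]: detected unknown node state 'proxmox03'" > "$log"
grep -c 'unknown node state' "$log"
```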
On 09/24/2015 09:33 PM, Gilberto Nunes wrote:
Hi
Any idea how I can remove a deleted node from HA?
It still appears when I run
ha-manager status
quorum OK
master
Hi guys
I have a three-node cluster with PVE 4 and HA enabled.
Just one VM, with 8 GB of memory and 1.5 TB of hard disk.
If the VM has 2 GB of memory and 32 GB of hard disk, HA works fine.
But now, with more memory and hard disk, HA freezes the VM:
ha-manager status
quorum OK
master proxmox02
UPDATE:
I get this in the syslog:
Sep 24 11:59:30 proxmox02 pve-ha-crm[1360]: service 'vm:120': state changed
from 'started' to 'freeze'
Sep 24 11:59:40 proxmox02 corosync[1304]: [TOTEM ] A new membership (
10.1.1.20:224) was formed. Members left: 1
Sep 24 11:59:40 proxmox02 pmxcfs[1055]: [dcdb]
Oh, and after you have added the votes, remove the PVE VM and delete it
from the cluster with:
pvecm delnode oldvmnode-hostname
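The steps above can be sketched as a short session (the hostname is the placeholder from the message; illustrative only, run on a remaining cluster node after the old node is powered off):

```shell
# Remove the dead node from the cluster configuration
pvecm delnode oldvmnode-hostname

# Optionally clean up the leftover node directory in pmxcfs so the node
# no longer shows up (path assumed from the standard /etc/pve layout)
rm -rf /etc/pve/nodes/oldvmnode-hostname
```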
On 09/24/2015 05:08 PM, Thomas Lamprecht wrote:
Adapt the corosync.conf file in the /etc/pve folder there, set
quorum_votes of one of the node entries from 1 to 2.
This achieves the same as your current setup but with much less overhead,
as no VM acting as a PVE node is needed.
*NOTE*: for real HA three real nodes are needed.
If you
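The vote change described above can be sketched as a corosync.conf nodelist fragment (node names and IDs are made up for illustration; only quorum_votes matters here):

```
nodelist {
  node {
    name: proxmox01
    nodeid: 1
    quorum_votes: 2   # raised from 1 to 2, as suggested above
  }
  node {
    name: proxmox02
    nodeid: 2
    quorum_votes: 1
  }
}
```

Edit the copy in the /etc/pve folder, as noted above, so pmxcfs propagates the change to all nodes.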
It works with 3 nodes when I run pvecm expected 3!
2015-09-24 12:08 GMT-03:00 Thomas Lamprecht :
> Adapt the corosync.conf file in the /etc/pve folder there, set
> quorum_votes of one of the node entries from 1 to 2.
>
> This achieves the same as your current setup
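The workaround mentioned above can be sketched as a short session (it needs a live cluster node, so this is illustrative only):

```shell
# Tell corosync to expect only 3 votes for quorum
pvecm expected 3

# Verify quorum and vote counts afterwards
pvecm status
```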
Of course! That's what I expect, since HA has 3 nodes. If NODE 1
crashes, NODE 2 is supposed to act...
2015-09-24 12:51 GMT-03:00 Dietmar Maurer :
> > I get this on syslog
> >
> > Sep 24 11:59:30 proxmox02 pve-ha-crm[1360]: service 'vm:120': state
> changed
> > from 'started'
> Of course! That's what I expect, since HA has 3 nodes. If NODE 1 crashes,
The state 'freeze' means that node 1 was shut down normally, so HA waits
until the node starts again.
___
pve-user mailing list
pve-user@pve.proxmox.com
Nice shot Thomas!
I needed a change in the quorum section as well, adding two_node: 1.
Now it works fine with two nodes.
Thanks a lot!
2015-09-24 12:09 GMT-03:00 Thomas Lamprecht :
> Oh, and after you added the votes, delete the PVE VM and delete it with the
> pvecm delnode
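For two-node clusters, the quorum change mentioned above can be sketched as a corosync.conf fragment (a sketch assuming the standard votequorum provider):

```
quorum {
  provider: corosync_votequorum
  two_node: 1
}
```

With two_node: 1, either node can keep quorum while the other is down (it also implies wait_for_all by default).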
Hi all...
I deployed two servers here, with PVE 4 and DRBD9.
Some days ago, I was able to install them and perform some live migrations.
Now I can't do it anymore...
I deployed a new VM with Windows XP, just for testing, but I can't migrate...
I get this error:
Sep 11 10:57:03 starting migration of VM 101 to
On 08.09.2015 at 08:47, Thomas Lamprecht wrote:
>
>> On 04.09.2015 at 18:43, Gilberto Nunes wrote:
>>> I'm in trouble here with cluster construction...
>>> Do you had some trouble with cluster configuration??
>> Yes, I did have troubles - adding a node failed. What's the problem in
>> your
On 04.09.2015 at 18:43, Gilberto Nunes wrote:
I'm in trouble here with cluster construction...
Did you have any trouble with cluster configuration?
Yes, I did have troubles - adding a node failed. What's the problem in
your case?
How did you add the node? What was the failure message?
Hi again... Well... I don't remember right now, but I was able to create
the cluster, yet unable to add the other node...
Finally, I gave up, formatted both servers, and re-installed everything
from scratch.
Now it works nicely...
Thanks
2015-09-07 19:03 GMT-03:00 Hermann Himmelbauer
On 04.09.2015 at 14:19, Gilberto Nunes wrote:
> Hi guys
> Somebody here is using PVE 4???
> How is it??
Yes, I personally use it and like it. But I have no comparison to PVE 3
as I'm a newcomer to PVE.
It seems PVE 4 is quite stable, but I experience the following shortcomings:
- No Ceph
Hi Hermann..
I am more interested in the HA features, mainly watchdog/softdog...
I will try it
Thanks a lot
2015-09-04 9:28 GMT-03:00 Hermann Himmelbauer :
> On 04.09.2015 at 14:19, Gilberto Nunes wrote:
> > Hi guys
> > Somebody here is using PVE 4???
> > How is it??
>
> Yes, I
Hi guys
Somebody here is using PVE 4???
How is it??
--
Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
Yes, but not in any production environment yet.
On 9/4/2015 8:19 AM, Gilberto Nunes wrote:
Hi guys
Somebody here is using PVE 4???
How is it??