On Wed, 2018-06-27 at 18:01 +0300, Vaggelis Papastavros wrote:
> Dear friends,
> (I am sending the same message again to conform with the forum's text
> formatting.)
> Many thanks for your brilliant answers.
> Ken, your suggestion:
> "The second problem is that you have an ordering constrai...
Dear friends,
(I am sending the same message again to conform with the forum's text
formatting.)
Many thanks for your brilliant answers.
Ken, your suggestion:
"The second problem is that you have an ordering constraint but no
colocation constraint. With your current setup, windows_VM h...
Hello Vaggelis,
just a technical meta-note below:
On 27/06/18 11:09 +0300, Vaggelis Papastavros wrote:
> *My question, Ken, is: are the steps below (in red) enough to ensure
> that the new VM will be placed on node 1?*
It's an unwritten convention to refrain from HTML-formatted messages
with...
Many thanks for your brilliant answers.
Ken, your suggestion:
"The second problem is that you have an ordering constraint but no
colocation constraint. With your current setup, windows_VM has to start
after the storage, but it doesn't have to start on the same node. You
need a colocation con...
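For reference, the colocation constraint Ken describes would look
roughly like this (storage_res is a placeholder name, not taken from
this thread):

  pcs constraint colocation add windows_VM_res with storage_res INFINITY

With a score of INFINITY, windows_VM_res is only allowed to run on the
node where the storage resource is running.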
26.06.2018 19:36, Ken Gaillot wrote:
>
> One problem is that you are creating the VM, and then later adding
> constraints about what the cluster can do with it. Therefore there is a
> time in between where the cluster can start it without any constraint.
> The solution is to make your changes all...
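One way to make the changes land atomically is to stage them against an
offline copy of the CIB and push everything in one step. A sketch (the
constraint target storage_res is assumed; the config path is from this
thread):

  # save the current CIB to a scratch file
  pcs cluster cib vm_changes.xml
  # stage the resource and its constraints against the file, not the live cluster
  pcs -f vm_changes.xml resource create windows_VM_res VirtualDomain \
      hypervisor="qemu:///system" \
      config="/opt/sigma_vms/xml_definitions/windows_VM.xml"
  pcs -f vm_changes.xml constraint colocation add windows_VM_res with storage_res INFINITY
  pcs -f vm_changes.xml constraint order storage_res then windows_VM_res
  # apply the resource and its constraints in a single transition
  pcs cluster cib-push vm_changes.xml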
On Tue, 2018-06-26 at 18:24 +0300, Vaggelis Papastavros wrote:
> Many thanks for the excellent answer.
> Ken, after investigation of the log files:
> In our environment we have two DRBD partitions, one for customer_vms
> and one for sigma_vms.
> For the customer_vms the active node is node2, and for...
I think that you need:

  pcs resource create windows_VM_res VirtualDomain \
      hypervisor="qemu:///system" \
      config="/opt/sigma_vms/xml_definitions/windows_VM.xml" \
      meta target-role=Stopped

This way, Pacemaker doesn't start the resource as soon as it is created.
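Then, once the ordering and colocation constraints have been added, the
VM can be released with either of these (equivalent in effect):

  pcs resource enable windows_VM_res
  pcs resource meta windows_VM_res target-role=Started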
2018-06-26 17:24 GMT+02:00 Vaggelis Papastavros:
> Many thanks...
Many thanks for the excellent answer.
Ken, after investigation of the log files:
In our environment we have two DRBD partitions, one for customer_vms and
one for sigma_vms.
For the customer_vms the active node is node2, and for the sigma_vms the
active node is node1.
[root@sgw-01 drbd.d]# dr...
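Since the VM's definition lives under /opt/sigma_vms, it presumably has
to follow the sigma_vms DRBD primary to node1. The detail that matters
is the role: the VM must be colocated with the Master role of the DRBD
resource, not just with the clone. Assuming a master/slave resource
named sigma_vms_drbd-master (the name is a guess, not from this
thread), that is:

  pcs constraint colocation add windows_VM_res with master sigma_vms_drbd-master INFINITY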
On Mon, 2018-06-25 at 09:47 -0500, Ken Gaillot wrote:
> On Mon, 2018-06-25 at 11:33 +0300, Vaggelis Papastavros wrote:
> > Dear friends,
> >
> > We have the following configuration:
> >
> > CentOS 7, Pacemaker 0.9.152 and Corosync 2.4.0, storage with DRBD and
> > STONITH enabled with APC P...
On Mon, 2018-06-25 at 11:33 +0300, Vaggelis Papastavros wrote:
> Dear friends,
>
> We have the following configuration:
>
> CentOS 7, Pacemaker 0.9.152 and Corosync 2.4.0, storage with DRBD and
> STONITH enabled with APC PDU devices.
>
> I have a Windows VM configured as a cluster resource wi...
Dear friends,
We have the following configuration:
CentOS 7, Pacemaker 0.9.152 and Corosync 2.4.0, storage with DRBD and
STONITH enabled with APC PDU devices.
I have a Windows VM configured as a cluster resource with the following
attributes:
Resource: WindowSentinelOne_res (class=ocf pro...
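Putting the advice from this thread together, the whole sequence would
look roughly like this (a sketch; the master/slave resource name
sigma_vms_drbd-master is assumed for illustration):

  # create the VM resource stopped, so it cannot start before its constraints exist
  pcs resource create windows_VM_res VirtualDomain \
      hypervisor="qemu:///system" \
      config="/opt/sigma_vms/xml_definitions/windows_VM.xml" \
      meta target-role=Stopped
  # order it after the storage AND tie it to the storage's node
  pcs constraint order promote sigma_vms_drbd-master then start windows_VM_res
  pcs constraint colocation add windows_VM_res with master sigma_vms_drbd-master INFINITY
  # only now allow the cluster to start it
  pcs resource enable windows_VM_res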