Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread Cédric Dufour - Idiap Research Institute
One thing you should keep in mind in the context of HA:

If migration is triggered by a node failing somehow (but still sane
enough to allow migration), and several VMs must be migrated
simultaneously, then beware of network bandwidth vs. VM size (RAM-wise)!
Migrations may time out *way* before they're done.

Or, when manually migrating many VMs away from a node, make sure to
proceed with no more VMs at a time than your bandwidth and timeouts allow.
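
A quick back-of-the-envelope check (assuming a dedicated 1 Gbit/s migration
link, i.e. roughly 110 MiB/s of effective throughput): a guest with 16 GiB of
RAM needs about 16384 MiB / 110 MiB/s ~= 150 seconds to transfer, so a 60s
migrate_to timeout would abort that migration long before it completes. Size
the timeouts - and the number of VMs migrated in parallel - accordingly.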

(been there for you ;-) )

On 04/02/16 14:17, Kyle O'Donnell wrote:
> Thanks very much Cédric
>
> I've added migrate_to/from to my config:
>
> primitive tome_kvm ocf:heartbeat:VirtualDomain \
> params config="/ocfs2/d01/tome/tome.xml" hypervisor="qemu:///system" 
> migration_transport="ssh" force_stop="false" \
> meta allow-migrate="true" target-role="Started" \
> op start timeout="120s" interval="0" \
> op stop timeout="120s" interval="0" \
> op monitor timeout="30" interval="10" depth="0" \
> op migrate_to timeout="60s" interval="0" \
> op migrate_from timeout="60s" interval="0" \
> utilization cpu="4" hv_memory="4096"
>
> when I run crm resource migrate guest nodeX nothing happens now:
>
> # crm resource status tome_kvm
> resource tome_kvm is running on: ny4j1-kvm02 
> # crm resource migrate tome_kvm ny4j1-kvm01 
> # echo $?
> 0
> # crm resource status tome_kvm
> resource tome_kvm is running on: ny4j1-kvm02 
>
>  and I just figured it out!
>
> I had:
> location cli-prefer-tome tome_kvm inf: ny4j1-kvm02
>
> removed that and I am all good!
>
> Thanks everyone!!!
>
>
> - Original Message -
> From: "Cédric Dufour - Idiap Research Institute" 
> To: "users" 
> Sent: Thursday, February 4, 2016 8:09:10 AM
> Subject: Re: [ClusterLabs] kvm live migration, resource moving
>
> Hello,
>
> Here, we have live migration working like a charm through the cluster.
> Below is the XML excerpt of a resource configuration:
>
> <primitive id="FOOBAR" class="ocf" provider="..." type="LibvirtQemu">
>   <instance_attributes id="...">
>     <nvpair id="..." name="config" value="/havc/config/libvirt/FOOBAR.xml"/>
>   </instance_attributes>
>   <meta_attributes id="...">
>     <nvpair id="..." name="allow-migrate" value="true"/>
>   </meta_attributes>
>   <operations>
>     <op id="..." name="monitor" timeout="30s" interval="60s"/>
>     <op id="..." name="start" timeout="..." interval="0"/>
>     <op id="..." name="stop" timeout="..." interval="0"/>
>     <op id="..." name="migrate_to" timeout="60s" interval="0"/>
>     <op id="..." name="migrate_from" timeout="60s" interval="0"/>
>   </operations>
> </primitive>
>
> The LibvirtQemu agent is a custom one derived from the VirtualDomain
> agent (for reasons that are off-topic).
>
> The points worth noting are:
>
> - the "allow-migrate" meta attribute (see
> http://www.linux-ha.org/wiki/VirtualDomain_%28resource_agent%29 "If the
> allow-migrate meta parameter is set to true, then a resource migration
> will not map to a domain shutdown/startup cycle, but to an actual,
> potentially live, resource migration between cluster nodes. ")
>
> - the "migrate_from" and "migrate_to" timeouts (which must be set
> relative to how big - RAM-wise - your VMs are and the bandwidth
> available for migration); past this timeout, the migration will be
> interrupted and the VM will be shut down/restarted
>
> Hope it helps,
>
> Cédric
>
>
>
> On 04/02/16 13:44, Kyle O'Donnell wrote:
>> That is helpful but I think I am looking at the wrong documentation:
>>
>> http://www.linux-ha.org/wiki/VirtualDomain_(resource_agent)
>> http://linux-ha.org/doc/man-pages/re-ra-VirtualDomain.html
>>
>> Can you point me to the docs you are referencing?
>>
>> - Original Message -
>> From: "RaSca" 
>> To: "users" 
>> Sent: Thursday, February 4, 2016 6:48:26 AM
>> Subject: Re: [ClusterLabs] kvm live migration, resource moving
>>
>> If your environment is correctly configured on the libvirt side as
>> well, everything should work out of the box; if it does not, you can
>> pass migrate_options to make it work.
>>
>> From the resource agent documentation:
>>
>> migrate_options:  Extra virsh options for the guest live migration. You
>> can also specify here --migrateuri if the calculated migrate URI is
>> unsuitable for your environment. If --migrateuri is set then
>> migration_network_suffix and migrateport are effectively ignored. Use
>> "%n" as the placeholder for the target node name.
>> Please refer to the libvirt documentation for details on guest migration.
>>
>> Hope this helps,
>>
>
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Re: [ClusterLabs] HA configuration

2016-02-04 Thread emmanuel segura
You need to be sure that your redis resource has master/slave support,
and I think this colocation needs to be inverted:

colocation resource_location1 inf: redis_clone:Master kamailio

to

colocation resource_location1 inf: kamailio redis_clone:Master

You need an order constraint too:

order resource_order1 inf: redis_clone:promote kamailio:start

Anyway, if you want to simplify your config, make a group:

group mygroup myresource myvip

colocation resource_location1 inf: mygroup redis_clone:Master
order resource_order1 inf: redis_clone:promote mygroup:start
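
Mapped onto the resource names from your config, that would be something like
(just a sketch - the group name is arbitrary, and group members start in the
listed order, so VIP comes up before kamailio as in your current order
constraint):

group kamailio_grp VIP kamailio
colocation resource_location1 inf: kamailio_grp redis_clone:Master
order resource_order1 inf: redis_clone:promote kamailio_grp:start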

2016-02-04 11:14 GMT+01:00 Rishin Gangadharan :
> Hi All,
>
>  Could you please help me with the corosync/pacemaker configuration
> with crmsh.
>
>
>
> My requirments
>
>   I have three resources
>
> 1.   VIP
>
> 2.   Kamailio
>
> 3.   Redis DB
>
> I want to configure HA for kamailio with VIP and Redis in Master/Slave mode. I
> have configured VIP and kamailio and it's working fine, i.e. when the kamailio
> process fails the VIP will switch to another machine and start kamailio.
>
> When kamailio fails, I first want to move the VIP and then Redis; Redis must
> switch to Master on the new node and the previously active node should become slave
>
>
>
> I.e.  Node 1 : Active  (running resources VIP, Redis:Master, Kamailio)
>
>  Node 2 : Passive ( Redis as slave)
>
>
>
> My aim is that when Kamailio or any resource on Node 1 fails, it should look
> like this
>
>
>
> Node 2 : Active  (running resources VIP, Redis:Master, Kamailio)
>
>  Node 1 : Passive ( Redis as slave)
>
> crm configure edit
>
>
>
> node PCSCF
>
> node PCSCF18
>
> primitive VIP IPaddr2 \
>
> params ip=10.193.30.28 nic=eth0 \
>
> op monitor interval=2s \
>
> meta is-managed=true target-role=Started
>
> primitive kamailio ocf:kamailio:kamailio_ra \
>
> op start interval=5s \
>
> op monitor interval=2s \
>
> meta migration-threshold=1 failure-timeout=5s
>
> primitive redis ocf:kamailio:redis \
>
> meta target-role=Master is-managed=true \
>
> op monitor interval=1s role=Master timeout=5s on-fail=restart \
>
> op monitor interval=1s role=Slave timeout=5s on-fail=restart
>
> ms redis_clone redis \
>
> meta notify=true is-managed=true ordered=false interleave=false
> globally-unique=false target-role=Stopped migration-threshold=1
>
> colocation resource_location inf: kamailio VIP
>
> colocation resource_location1 inf: redis_clone:Master kamailio
>
> order resource_starting_order inf: VIP kamailio
>
> property cib-bootstrap-options: \
>
> dc-version=1.1.11-97629de \
>
> cluster-infrastructure="classic openais (with plugin)" \
>
> expected-quorum-votes=3 \
>
> stonith-enabled=false \
>
> no-quorum-policy=ignore \
>
> last-lrm-refresh=1454577107
>
> property redis_replication: \
>
> redis_REPL_INFO=PCSCF
>
>
>
>
>
>
>
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>



-- 
  .~.
  /V\
 //  \\
/(   )\
^`~'^

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] [OCF] Pacemaker reports a multi-state clone resource instance as running while it is not in fact

2016-02-04 Thread Bogdan Dobrelya
Hello.
Regarding the original issue, the good news is that the resource-agents
ocf-shellfuncs no longer causes fork bombs for the dummy OCF RA [0]
after the fix [1] was done. The bad news is that the "self-forking"
monitors issue seems to remain for the rabbitmq OCF RA [2], and I can
reproduce it for another custom agent [3], so I'd guess it may be valid
for other agents as well.

IIUC, the issue seems related to how lrmd forks monitor actions.
I tried to debug both pacemaker 1.1.10 and 1.1.12 with gdb as follows:

# cat ./cmds
# follow into the child after each fork, but stay attached to the parent too
set follow-fork-mode child
set detach-on-fork off
# keep debugging the new image across an exec
set follow-exec-mode new
# stop whenever the inferior forks
catch fork
catch vfork
cont
# gdb -x cmds /usr/lib/pacemaker/lrmd `pgrep lrmd`

I can confirm it catches forked monitors, and nested forks as well.
But I have *many* debug symbols missing, bt is full of question marks
and, honestly, I'm not a gdb guru and do not know what to check for in
the reproduced cases.

So any help on how to troubleshoot things further is very much appreciated!

[0] https://github.com/bogdando/dummy-ocf-ra
[1] https://github.com/ClusterLabs/resource-agents/issues/734
[2]
https://github.com/rabbitmq/rabbitmq-server/blob/master/scripts/rabbitmq-server-ha.ocf
[3]
https://git.openstack.org/cgit/openstack/fuel-library/tree/files/fuel-ha-utils/ocf/ns_vrouter
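
(A quick way to watch this without gdb - a rough sketch, assuming the dummy RA
path from the ps output quoted below:

# show everything lrmd has forked, with PIDs and arguments
pstree -ap $(pgrep -x lrmd)

# or simply count the running copies of one monitor
pgrep -cf '/usr/lib/ocf/resource.d/dummy/dummy monitor'

On an affected node the count keeps growing instead of staying at one
short-lived monitor per resource.)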

On 04.01.2016 17:33, Bogdan Dobrelya wrote:
> On 04.01.2016 17:14, Dejan Muhamedagic wrote:
>> Hi,
>>
>> On Mon, Jan 04, 2016 at 04:52:43PM +0100, Bogdan Dobrelya wrote:
>>> On 04.01.2016 16:36, Ken Gaillot wrote:
 On 01/04/2016 09:25 AM, Bogdan Dobrelya wrote:
> On 04.01.2016 15:50, Bogdan Dobrelya wrote:
>> [...]
> Also note, that lrmd spawns *many* monitors like:
> root  6495  0.0  0.0  70268  1456 ?Ss2015   4:56  \_
> /usr/lib/pacemaker/lrmd
> root 31815  0.0  0.0   4440   780 ?S15:08   0:00  |   \_
> /bin/sh /usr/lib/ocf/resource.d/dummy/dummy monitor
> root 31908  0.0  0.0   4440   388 ?S15:08   0:00  |
>   \_ /bin/sh /usr/lib/ocf/resource.d/dummy/dummy monitor
> root 31910  0.0  0.0   4440   384 ?S15:08   0:00  |
>   \_ /bin/sh /usr/lib/ocf/resource.d/dummy/dummy monitor
> root 31915  0.0  0.0   4440   392 ?S15:08   0:00  |
>   \_ /bin/sh /usr/lib/ocf/resource.d/dummy/dummy monitor
> ...

 At first glance, that looks like your monitor action is calling itself
 recursively, but I don't see how in your code.
>>>
>>> Yes, it should be a bug in the ocf-shellfuncs's ocf_log().
>>
>> If you're sure about that, please open an issue at
>> https://github.com/ClusterLabs/resource-agents/issues
> 
> Submitted [0]. Thank you!
> Note that it seems the import (sourcing) itself causes the issue, not the
> ocf_run or ocf_log code itself.
> 
> [0] https://github.com/ClusterLabs/resource-agents/issues/734
> 
>>
>> Thanks,
>>
>> Dejan
>>
>> ___
>> Users mailing list: Users@clusterlabs.org
>> http://clusterlabs.org/mailman/listinfo/users
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
>>
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Antw: Re: kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
great.

i think i am sorted now.  thanks again everyone.

- Original Message -
From: "RaSca" 
To: "users" 
Sent: Thursday, February 4, 2016 9:23:44 AM
Subject: Re: [ClusterLabs] Antw: Re: kvm live migration, resource moving

The point here is that every time you do a manual migration, a location
constraint with score inf is created. You need to remember this or,
instead, once the migration has completed you can just do an unmigrate
(or unmove), which will remove the constraint.
Or you can do as Ulrich suggested and specify a lifetime when migrating.

-- 
RaSca
Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene!
ra...@miamammausalinux.org
http://www.miamammausalinux.org

On 4/2/2016 14:52:52, Kyle O'Donnell wrote:
> you mean use a number instead of inf:? I thought the number was just a 
> preference/priority (lower = higher priority)?
> 
> - Original Message -
> From: "Ulrich Windl" 
> To: "users" 
> Sent: Thursday, February 4, 2016 8:47:23 AM
> Subject: [ClusterLabs] Antw: Re:  kvm live migration, resource moving
> 
Kyle O'Donnell wrote on 04.02.2016 at 14:17 in message
> <124846465.11794.1454591851253.javamail.zim...@0b10.mx>:
> 
> [...]
>> I had:
>> location cli-prefer-tome tome_kvm inf: ny4j1-kvm02
>>
>> removed that and I am all good!
> [...]
> 
> That's why I ALWAYS specify a time when migrating resources. So if you forget 
> to unmigrate, the next migration will most likely still work.
> 
> Regards,
> Ulrich
> 
> 
> 
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
> 
> 


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Re: Antw: Re: kvm live migration, resource moving

2016-02-04 Thread Ulrich Windl
Hi!

No, something like "crm resource migrate rsc PT5M".

>>> Kyle O'Donnell wrote on 04.02.2016 at 14:52 in message
<1203120444.11849.1454593972602.javamail.zim...@0b10.mx>:
> you mean use a number instead of inf:? I thought the number was just a 
> preference/priority (lower = higher priority)?
> 
> - Original Message -
> From: "Ulrich Windl" 
> To: "users" 
> Sent: Thursday, February 4, 2016 8:47:23 AM
> Subject: [ClusterLabs] Antw: Re:  kvm live migration, resource moving
> 
Kyle O'Donnell wrote on 04.02.2016 at 14:17 in message
> <124846465.11794.1454591851253.javamail.zim...@0b10.mx>:
> 
> [...]
>> I had:
>> location cli-prefer-tome tome_kvm inf: ny4j1-kvm02
>> 
>> removed that and I am all good!
> [...]
> 
> That's why I ALWAYS specify a time when migrating resources. So if you 
> forget to unmigrate, the next migration will most likely still work.
> 
> Regards,
> Ulrich
> 
> 
> 
> ___
> Users mailing list: Users@clusterlabs.org 
> http://clusterlabs.org/mailman/listinfo/users 
> 
> Project Home: http://www.clusterlabs.org 
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf 
> Bugs: http://bugs.clusterlabs.org 
> 





___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Antw: Re: kvm live migration, resource moving

2016-02-04 Thread RaSca
The point here is that every time you do a manual migration, a location
constraint with score inf is created. You need to remember this or,
instead, once the migration has completed you can just do an unmigrate
(or unmove), which will remove the constraint.
Or you can do as Ulrich suggested and specify a lifetime when migrating.
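
For example (crmsh syntax; resource and node names are the ones from this
thread):

crm resource migrate tome_kvm ny4j1-kvm01        # adds a cli-prefer-* location constraint with score inf
crm resource unmigrate tome_kvm                  # removes that constraint once the move is done
crm resource migrate tome_kvm ny4j1-kvm01 PT5M   # or let the constraint expire by itself after 5 minutes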

-- 
RaSca
Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene!
ra...@miamammausalinux.org
http://www.miamammausalinux.org

On 4/2/2016 14:52:52, Kyle O'Donnell wrote:
> you mean use a number instead of inf:? I thought the number was just a 
> preference/priority (lower = higher priority)?
> 
> - Original Message -
> From: "Ulrich Windl" 
> To: "users" 
> Sent: Thursday, February 4, 2016 8:47:23 AM
> Subject: [ClusterLabs] Antw: Re:  kvm live migration, resource moving
> 
Kyle O'Donnell wrote on 04.02.2016 at 14:17 in message
> <124846465.11794.1454591851253.javamail.zim...@0b10.mx>:
> 
> [...]
>> I had:
>> location cli-prefer-tome tome_kvm inf: ny4j1-kvm02
>>
>> removed that and I am all good!
> [...]
> 
> That's why I ALWAYS specify a time when migrating resources. So if you forget 
> to unmigrate, the next migration will most likely still work.
> 
> Regards,
> Ulrich
> 
> 
> 
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
> 
> 

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Antw: Re: kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
you mean use a number instead of inf:? I thought the number was just a 
preference/priority (lower = higher priority)?

- Original Message -
From: "Ulrich Windl" 
To: "users" 
Sent: Thursday, February 4, 2016 8:47:23 AM
Subject: [ClusterLabs] Antw: Re:  kvm live migration, resource moving

>>> Kyle O'Donnell wrote on 04.02.2016 at 14:17 in message
<124846465.11794.1454591851253.javamail.zim...@0b10.mx>:

[...]
> I had:
> location cli-prefer-tome tome_kvm inf: ny4j1-kvm02
> 
> removed that and I am all good!
[...]

That's why I ALWAYS specify a time when migrating resources. So if you forget 
to unmigrate, the next migration will most likely still work.

Regards,
Ulrich




___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Re: kvm live migration, resource moving

2016-02-04 Thread Ulrich Windl
>>> Kyle O'Donnell wrote on 04.02.2016 at 14:17 in message
<124846465.11794.1454591851253.javamail.zim...@0b10.mx>:

[...]
> I had:
> location cli-prefer-tome tome_kvm inf: ny4j1-kvm02
> 
> removed that and I am all good!
[...]

That's why I ALWAYS specify a time when migrating resources. So if you forget 
to unmigrate, the next migration will most likely still work.

Regards,
Ulrich



___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
Thanks very much Cédric

I've added migrate_to/from to my config:

primitive tome_kvm ocf:heartbeat:VirtualDomain \
params config="/ocfs2/d01/tome/tome.xml" hypervisor="qemu:///system" 
migration_transport="ssh" force_stop="false" \
meta allow-migrate="true" target-role="Started" \
op start timeout="120s" interval="0" \
op stop timeout="120s" interval="0" \
op monitor timeout="30" interval="10" depth="0" \
op migrate_to timeout="60s" interval="0" \
op migrate_from timeout="60s" interval="0" \
utilization cpu="4" hv_memory="4096"

when I run crm resource migrate guest nodeX nothing happens now:

# crm resource status tome_kvm
resource tome_kvm is running on: ny4j1-kvm02 
# crm resource migrate tome_kvm ny4j1-kvm01 
# echo $?
0
# crm resource status tome_kvm
resource tome_kvm is running on: ny4j1-kvm02 

 and I just figured it out!

I had:
location cli-prefer-tome tome_kvm inf: ny4j1-kvm02

removed that and I am all good!

Thanks everyone!!!


- Original Message -
From: "Cédric Dufour - Idiap Research Institute" 
To: "users" 
Sent: Thursday, February 4, 2016 8:09:10 AM
Subject: Re: [ClusterLabs] kvm live migration, resource moving

Hello,

Here, we have live migration working like a charm through the cluster.
Below is the XML excerpt of a resource configuration:

<primitive id="FOOBAR" class="ocf" provider="..." type="LibvirtQemu">
  <instance_attributes id="...">
    <nvpair id="..." name="config" value="/havc/config/libvirt/FOOBAR.xml"/>
  </instance_attributes>
  <meta_attributes id="...">
    <nvpair id="..." name="allow-migrate" value="true"/>
  </meta_attributes>
  <operations>
    <op id="..." name="monitor" timeout="30s" interval="60s"/>
    <op id="..." name="start" timeout="..." interval="0"/>
    <op id="..." name="stop" timeout="..." interval="0"/>
    <op id="..." name="migrate_to" timeout="60s" interval="0"/>
    <op id="..." name="migrate_from" timeout="60s" interval="0"/>
  </operations>
</primitive>

The LibvirtQemu agent is a custom one derived from the VirtualDomain
agent (for reasons that are off-topic).

The points worth noting are:

- the "allow-migrate" meta attribute (see
http://www.linux-ha.org/wiki/VirtualDomain_%28resource_agent%29 "If the
allow-migrate meta parameter is set to true, then a resource migration
will not map to a domain shutdown/startup cycle, but to an actual,
potentially live, resource migration between cluster nodes. ")

- the "migrate_from" and "migrate_to" timeouts (which must be set
relative to how big - RAM-wise - your VMs are and the bandwidth
available for migration); past this timeout, the migration will be
interrupted and the VM will be shut down/restarted
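
In crm shell terms, a rough equivalent with the stock VirtualDomain agent
would be something like this (illustrative values only - derive the migrate
timeouts from your VM RAM and the bandwidth available for migration):

primitive FOOBAR ocf:heartbeat:VirtualDomain \
    params config="/havc/config/libvirt/FOOBAR.xml" \
    meta allow-migrate="true" \
    op monitor timeout="30s" interval="60s" \
    op migrate_to timeout="300s" interval="0" \
    op migrate_from timeout="120s" interval="0"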

Hope it helps,

Cédric



On 04/02/16 13:44, Kyle O'Donnell wrote:
> That is helpful but I think I am looking at the wrong documentation:
>
> http://www.linux-ha.org/wiki/VirtualDomain_(resource_agent)
> http://linux-ha.org/doc/man-pages/re-ra-VirtualDomain.html
>
> Can you point me to the docs you are referencing?
>
> - Original Message -
> From: "RaSca" 
> To: "users" 
> Sent: Thursday, February 4, 2016 6:48:26 AM
> Subject: Re: [ClusterLabs] kvm live migration, resource moving
>
> If your environment is correctly configured on the libvirt side as well,
> everything should work out of the box; if it does not, you can
> pass migrate_options to make it work.
>
> From the resource agent documentation:
>
> migrate_options:  Extra virsh options for the guest live migration. You
> can also specify here --migrateuri if the calculated migrate URI is
> unsuitable for your environment. If --migrateuri is set then
> migration_network_suffix and migrateport are effectively ignored. Use
> "%n" as the placeholder for the target node name.
> Please refer to the libvirt documentation for details on guest migration.
>
> Hope this helps,
>



___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread Cédric Dufour - Idiap Research Institute
Hello,

Here, we have live migration working like a charm through the cluster.
Below is the XML excerpt of a resource configuration:

<primitive id="FOOBAR" class="ocf" provider="..." type="LibvirtQemu">
  <instance_attributes id="...">
    <nvpair id="..." name="config" value="/havc/config/libvirt/FOOBAR.xml"/>
  </instance_attributes>
  <meta_attributes id="...">
    <nvpair id="..." name="allow-migrate" value="true"/>
  </meta_attributes>
  <operations>
    <op id="..." name="monitor" timeout="30s" interval="60s"/>
    <op id="..." name="start" timeout="..." interval="0"/>
    <op id="..." name="stop" timeout="..." interval="0"/>
    <op id="..." name="migrate_to" timeout="60s" interval="0"/>
    <op id="..." name="migrate_from" timeout="60s" interval="0"/>
  </operations>
</primitive>

The LibvirtQemu agent is a custom one derived from the VirtualDomain
agent (for reasons that are off-topic).

The points worth noting are:

- the "allow-migrate" meta attribute (see
http://www.linux-ha.org/wiki/VirtualDomain_%28resource_agent%29 "If the
allow-migrate meta parameter is set to true, then a resource migration
will not map to a domain shutdown/startup cycle, but to an actual,
potentially live, resource migration between cluster nodes. ")

- the "migrate_from" and "migrate_to" timeouts (which must be set
relative to how big - RAM-wise - your VMs are and the bandwidth
available for migration); past this timeout, the migration will be
interrupted and the VM will be shut down/restarted

Hope it helps,

Cédric



On 04/02/16 13:44, Kyle O'Donnell wrote:
> That is helpful but I think I am looking at the wrong documentation:
>
> http://www.linux-ha.org/wiki/VirtualDomain_(resource_agent)
> http://linux-ha.org/doc/man-pages/re-ra-VirtualDomain.html
>
> Can you point me to the docs you are referencing?
>
> - Original Message -
> From: "RaSca" 
> To: "users" 
> Sent: Thursday, February 4, 2016 6:48:26 AM
> Subject: Re: [ClusterLabs] kvm live migration, resource moving
>
> If your environment is correctly configured on the libvirt side as well,
> everything should work out of the box; if it does not, you can
> pass migrate_options to make it work.
>
> From the resource agent documentation:
>
> migrate_options:  Extra virsh options for the guest live migration. You
> can also specify here --migrateuri if the calculated migrate URI is
> unsuitable for your environment. If --migrateuri is set then
> migration_network_suffix and migrateport are effectively ignored. Use
> "%n" as the placeholder for the target node name.
> Please refer to the libvirt documentation for details on guest migration.
>
> Hope this helps,
>


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
That is helpful but I think I am looking at the wrong documentation:

http://www.linux-ha.org/wiki/VirtualDomain_(resource_agent)
http://linux-ha.org/doc/man-pages/re-ra-VirtualDomain.html

Can you point me to the docs you are referencing?

- Original Message -
From: "RaSca" 
To: "users" 
Sent: Thursday, February 4, 2016 6:48:26 AM
Subject: Re: [ClusterLabs] kvm live migration, resource moving

If your environment is correctly configured on the libvirt side as well,
everything should work out of the box; if it does not, you can
pass migrate_options to make it work.

From the resource agent documentation:

migrate_options:  Extra virsh options for the guest live migration. You
can also specify here --migrateuri if the calculated migrate URI is
unsuitable for your environment. If --migrateuri is set then
migration_network_suffix and migrateport are effectively ignored. Use
"%n" as the placeholder for the target node name.
Please refer to the libvirt documentation for details on guest migration.

Hope this helps,

-- 
RaSca
Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene!
ra...@miamammausalinux.org
http://www.miamammausalinux.org

On 4/2/2016 12:09:19, Kyle O'Donnell wrote:
> I explicitly stated I was using the VirtualDomain resource agent.
> 
> I did not know it supported live migration, since when I ran crm resource move 
> guest nodeX it shut down the guest before moving it.
> 
> Let me rephrase my question... How do I use the VirtualDomain resource agent 
> to live migrate a kvm guest between nodes in my cluster.
> 
> pacemaker 1.1.10+git20130802-1ubuntu2.3
> resource-agents 1:3.9.3+git20121009-3ubuntu2
> 
> - Original Message -
> From: "RaSca" 
> To: "users" 
> Sent: Thursday, February 4, 2016 4:30:18 AM
> Subject: Re: [ClusterLabs] kvm live migration, resource moving
> 
> The VirtualDomain resource agent supports live migration just like the
> virsh command does, because it actually USES virsh, so again, why
> aren't you using VirtualDomain directly?
> 


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread RaSca
If your environment is correctly configured on the libvirt side as well,
everything should work out of the box; if it does not, you can
pass migrate_options to make it work.

From the resource agent documentation:

migrate_options:  Extra virsh options for the guest live migration. You
can also specify here --migrateuri if the calculated migrate URI is
unsuitable for your environment. If --migrateuri is set then
migration_network_suffix and migrateport are effectively ignored. Use
"%n" as the placeholder for the target node name.
Please refer to the libvirt documentation for details on guest migration.
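
For instance (the URI below is purely hypothetical, it is only meant to show
the %n placeholder):

primitive guest_kvm ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/guest.xml" \
          migration_transport="ssh" \
          migrate_options="--migrateuri tcp://%n-migration" \
    meta allow-migrate="true"

Here %n is replaced with the target node name, so the migration traffic can be
steered onto a dedicated interface or VLAN instead of the default route.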

Hope this helps,

-- 
RaSca
Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene!
ra...@miamammausalinux.org
http://www.miamammausalinux.org

On 4/2/2016 12:09:19, Kyle O'Donnell wrote:
> I explicitly stated I was using the VirtualDomain resource agent.
> 
> I did not know it supported live migration, since when I ran crm resource move 
> guest nodeX it shut down the guest before moving it.
> 
> Let me rephrase my question... How do I use the VirtualDomain resource agent 
> to live migrate a kvm guest between nodes in my cluster.
> 
> pacemaker 1.1.10+git20130802-1ubuntu2.3
> resource-agents 1:3.9.3+git20121009-3ubuntu2
> 
> - Original Message -
> From: "RaSca" 
> To: "users" 
> Sent: Thursday, February 4, 2016 4:30:18 AM
> Subject: Re: [ClusterLabs] kvm live migration, resource moving
> 
> The VirtualDomain resource agent supports live migration just like the
> virsh command does, because it actually USES virsh, so again, why
> aren't you using VirtualDomain directly?
> 

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
I explicitly stated I was using the VirtualDomain resource agent.

I did not know it supported live migration, since when I ran crm resource move 
guest nodeX it shut down the guest before moving it.

Let me rephrase my question... How do I use the VirtualDomain resource agent to 
live migrate a kvm guest between nodes in my cluster.

pacemaker 1.1.10+git20130802-1ubuntu2.3
resource-agents 1:3.9.3+git20121009-3ubuntu2

- Original Message -
From: "RaSca" 
To: "users" 
Sent: Thursday, February 4, 2016 4:30:18 AM
Subject: Re: [ClusterLabs] kvm live migration, resource moving

The VirtualDomain resource agent supports live migration just like the
virsh command does, because it actually USES virsh, so again, why
aren't you using VirtualDomain directly?

-- 
RaSca
Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene!
ra...@miamammausalinux.org
http://www.miamammausalinux.org

On 3/2/2016 18:28:09, Kyle O'Donnell wrote:
> Hi Rasca,
> 
> Because the 'virsh migrate --live' command will move the vm without 
> shutting down the guest operating system.  It transfers the running state of 
> the vm from one host to another.
> 
> -Kyle
> 
> - Original Message -
> From: "RaSca" 
> To: "users" 
> Sent: Wednesday, February 3, 2016 11:54:12 AM
> Subject: Re: [ClusterLabs] kvm live migration, resource moving
> 
> It is not clear to me why you need to do things by hand. Why are you
> using virsh once you could do a resource move within the cluster?
> 


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] HA configuration

2016-02-04 Thread Rishin Gangadharan
Hi All,
 Could you please help me with the corosync/pacemaker configuration with crmsh.

My requirments
  I have three resources

1.   VIP

2.   Kamailio

3.   Redis DB

I want to configure HA for kamailio with VIP and Redis in Master/Slave mode. I have 
configured VIP and kamailio and it's working fine, i.e. when the kamailio process 
fails the VIP will switch to another machine and start kamailio.

When kamailio fails, I first want to move the VIP and then Redis; Redis must switch 
to Master on the new node and the previously active node should become slave



I.e.  Node 1 : Active  (running resources VIP, Redis:Master, Kamailio)

 Node 2 : Passive ( Redis as slave)



My aim is that when Kamailio or any resource on Node 1 fails, it should look like this



Node 2 : Active  (running resources VIP, Redis:Master, Kamailio)

 Node 1 : Passive ( Redis as slave)
crm configure edit

node PCSCF
node PCSCF18
primitive VIP IPaddr2 \
params ip=10.193.30.28 nic=eth0 \
op monitor interval=2s \
meta is-managed=true target-role=Started
primitive kamailio ocf:kamailio:kamailio_ra \
op start interval=5s \
op monitor interval=2s \
meta migration-threshold=1 failure-timeout=5s
primitive redis ocf:kamailio:redis \
meta target-role=Master is-managed=true \
op monitor interval=1s role=Master timeout=5s on-fail=restart \
op monitor interval=1s role=Slave timeout=5s on-fail=restart
ms redis_clone redis \
meta notify=true is-managed=true ordered=false interleave=false 
globally-unique=false target-role=Stopped migration-threshold=1
colocation resource_location inf: kamailio VIP
colocation resource_location1 inf: redis_clone:Master kamailio
order resource_starting_order inf: VIP kamailio
property cib-bootstrap-options: \
dc-version=1.1.11-97629de \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes=3 \
stonith-enabled=false \
no-quorum-policy=ignore \
last-lrm-refresh=1454577107
property redis_replication: \
redis_REPL_INFO=PCSCF







___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] [Announce] libqb 1.0rc2 release (fixed subject)

2016-02-04 Thread Christine Caulfield
On 03/02/16 17:45, Jan Pokorný wrote:
> On 02/02/16 11:05 +, Christine Caulfield wrote:
>> I am pleased to announce the second 1.0 release candidate release of
>> libqb. Huge thanks to all those who have contributed to this release.
> 
> IIUIC, good news is that so far 1.0.0 is a drop-in replacement for
> 0.17.2.
> 

Yes, there are no ABI changes since 0.17.2. 1.0.0 is intended to be a
stable base for further development.

> For convenience, there are EPEL/Fedora builds for testing:
> https://copr.fedorainfracloud.org/coprs/jpokorny/libqb/build/157747/
> 
> Specfile has already been changed akin to:
> https://github.com/ClusterLabs/libqb/pull/174
> 

That's been merged, thank you. And thanks for doing Fedora builds too.

Chrissie

> EPEL5 builds omitted as they fail due to autoconf being too ancient
> (i.e., not because of the changes in the mentioned PR).
> 
> 
> 
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
> 


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread RaSca
The VirtualDomain resource agent supports live migration just like the
virsh command does, because it actually USES virsh, so again, why
aren't you using VirtualDomain directly?
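
(Under the hood, the agent's migrate_to action ends up running roughly the
following - a sketch, assuming the ssh transport and default options:

virsh --connect qemu:///system migrate --live GUEST qemu+ssh://TARGET_NODE/system

so whatever works for that command by hand should also work when the cluster
drives the migration.)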

-- 
RaSca
Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene!
ra...@miamammausalinux.org
http://www.miamammausalinux.org

On 3/2/2016 18:28:09, Kyle O'Donnell wrote:
> Hi Rasca,
> 
> Because the 'virsh migrate --live' command will move the vm without 
> shutting down the guest operating system.  It transfers the running state of 
> the vm from one host to another.
> 
> -Kyle
> 
> - Original Message -
> From: "RaSca" 
> To: "users" 
> Sent: Wednesday, February 3, 2016 11:54:12 AM
> Subject: Re: [ClusterLabs] kvm live migration, resource moving
> 
> It is not clear to me why you need to do things by hand. Why are you
> using virsh once you could do a resource move within the cluster?
> 

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org