Re: [ClusterLabs] Why "Stop" action isn't called during failover?

2017-12-01 Thread Ken Gaillot
On Tue, 2017-11-21 at 12:58 +0200, Euronas Support wrote:
> Thanks for the answer Ken,
> The constraints are:
> 
> colocation vmgi_with_filesystem1 inf: vmgi filesystem1
> colocation vmgi_with_libvirtd inf: vmgi cl_libvirtd
> order vmgi_after_filesystem1 inf: filesystem1 vmgi
> order vmgi_after_libvirtd inf: cl_libvirtd vmgi

Those look good as far as ordering vmgi relative to the filesystem, but
I see below that it's vm_lomem1 that's left running. Is vmgi a group
containing vm_lomem1?
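
If vm_lomem1 is a separate primitive rather than a member of the vmgi group, the constraints above would not cover it. A hypothetical crm-shell sketch of the two ways to tie vm_lomem1 to filesystem1 (resource names are taken from the thread; the constraint names and group layout are illustrative, not the poster's actual configuration):

```
# Option 1: make vm_lomem1 a member of the vmgi group, so the
# group-level constraints against filesystem1 apply to it too:
group vmgi vm_lomem1

# Option 2: give vm_lomem1 its own constraints against filesystem1:
colocation vm_lomem1_with_filesystem1 inf: vm_lomem1 filesystem1
order vm_lomem1_after_filesystem1 inf: filesystem1 vm_lomem1
```

Either way, Pacemaker would then order a stop of vm_lomem1 before stopping filesystem1.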

> On 20.11.2017 16:44:00 Ken Gaillot wrote:
> > On Fri, 2017-11-10 at 11:15 +0200, Klecho wrote:
> > > Hi List,
> > > 
> > > I have a VM which is constraint-dependent on its storage
> > > resource.
> > > 
> > > When the storage resource goes down, I'm observing the following:
> > > 
> > > (pacemaker 1.1.16 & corosync 2.4.2)
> > > 
> > > Nov 10 10:04:36 [1202] NODE-2 pengine: info:
> > > LogActions:  
> > > Leave   vm_lomem1   (Started NODE-2)
> > > 
> > > Filesystem(p_AA_Filesystem_Drive16)[2097324]: 2017/11/10_10:04:37
> > > INFO: 
> > > sending signal TERM to: libvirt+ 1160142   1  0 09:01
> > > ?
> > > Sl 0:07 qemu-system-x86_64
> > > 
> > > 
> > > The VM (VirtualDomain RA) gets killed without calling "Stop" RA
> > > action.
> > > 
> > > Isn't the proper way to call "Stop" for all related resources in
> > > such
> > > cases?
> > 
> > Above, it's not Pacemaker that's killing the VM, it's the
> > Filesystem
> > resource itself.
> > 
> > When the Filesystem agent gets a stop request, if it's unable to
> > unmount the filesystem, it can try further action according to its
> > force_unmount option: "This option allows specifying how to handle
> > processes that are currently accessing the mount directory ...
> > Default
> > value, kill processes accessing mount point".
> > 
> > What does the configuration for the resources and constraints look
> > like? Based on what you described, Pacemaker shouldn't try to stop
> > the
> > Filesystem resource before successfully stopping the VM first.
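
For reference, that kill behavior is set on the Filesystem resource itself. A hedged crm-shell sketch (the device/directory/fstype values are placeholders; `force_unmount=safe`, where the installed resource-agents version supports it, restricts the agent to killing only processes that actually hold the filesystem open):

```
# Sketch only: parameter values below are placeholders, not the
# poster's real configuration.
primitive p_AA_Filesystem_Drive16 ocf:heartbeat:Filesystem \
    params device="/dev/..." directory="/mnt/..." fstype="..." \
           force_unmount=safe
```

Note this only changes how the agent kills blockers during its own stop; it does not replace proper ordering constraints, which are what should prevent the stop from reaching the filesystem while the VM is still running.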

-- 
Ken Gaillot 

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Why "Stop" action isn't called during failover?

2017-11-22 Thread Euronas Support
Thanks for the answer Ken,
The constraints are:

colocation vmgi_with_filesystem1 inf: vmgi filesystem1
colocation vmgi_with_libvirtd inf: vmgi cl_libvirtd
order vmgi_after_filesystem1 inf: filesystem1 vmgi
order vmgi_after_libvirtd inf: cl_libvirtd vmgi

On 20.11.2017 16:44:00 Ken Gaillot wrote:
> On Fri, 2017-11-10 at 11:15 +0200, Klecho wrote:
> > Hi List,
> > 
> > I have a VM which is constraint-dependent on its storage resource.
> > 
> > When the storage resource goes down, I'm observing the following:
> > 
> > (pacemaker 1.1.16 & corosync 2.4.2)
> > 
> > Nov 10 10:04:36 [1202] NODE-2 pengine: info: LogActions:  
> > Leave   vm_lomem1   (Started NODE-2)
> > 
> > Filesystem(p_AA_Filesystem_Drive16)[2097324]: 2017/11/10_10:04:37
> > INFO: 
> > sending signal TERM to: libvirt+ 1160142   1  0 09:01 ?
> > Sl 0:07 qemu-system-x86_64
> > 
> > 
> > The VM (VirtualDomain RA) gets killed without calling "Stop" RA
> > action.
> > 
> > Isn't the proper way to call "Stop" for all related resources in such
> > cases?
> 
> Above, it's not Pacemaker that's killing the VM, it's the Filesystem
> resource itself.
> 
> When the Filesystem agent gets a stop request, if it's unable to
> unmount the filesystem, it can try further action according to its
> force_unmount option: "This option allows specifying how to handle
> processes that are currently accessing the mount directory ... Default
> value, kill processes accessing mount point".
> 
> What does the configuration for the resources and constraints look
> like? Based on what you described, Pacemaker shouldn't try to stop the
> Filesystem resource before successfully stopping the VM first.

-- 
EuroNAS GmbH
Germany:  +49 89 325 33 931

http://www.euronas.com
http://www.euronas.com/contact-us/

Ettaler Str. 3
82166 Gräfelfing / Munich
Germany

Registergericht : Amtsgericht München
Registernummer : HRB 181698
Umsatzsteuer-Identifikationsnummer (USt-IdNr.) : DE267136706




Re: [ClusterLabs] Why "Stop" action isn't called during failover?

2017-11-20 Thread Ken Gaillot
On Fri, 2017-11-10 at 11:15 +0200, Klecho wrote:
> Hi List,
> 
> I have a VM which is constraint-dependent on its storage resource.
> 
> When the storage resource goes down, I'm observing the following:
> 
> (pacemaker 1.1.16 & corosync 2.4.2)
> 
> Nov 10 10:04:36 [1202] NODE-2 pengine: info: LogActions:  
> Leave   vm_lomem1   (Started NODE-2)
> 
> Filesystem(p_AA_Filesystem_Drive16)[2097324]: 2017/11/10_10:04:37
> INFO: 
> sending signal TERM to: libvirt+ 1160142   1  0 09:01 ?
> Sl 0:07 qemu-system-x86_64
> 
> 
> The VM (VirtualDomain RA) gets killed without calling "Stop" RA
> action.
> 
> Isn't the proper way to call "Stop" for all related resources in such
> cases?

Above, it's not Pacemaker that's killing the VM, it's the Filesystem
resource itself.

When the Filesystem agent gets a stop request, if it's unable to
unmount the filesystem, it can try further action according to its
force_unmount option: "This option allows specifying how to handle
processes that are currently accessing the mount directory ... Default
value, kill processes accessing mount point".

What does the configuration for the resources and constraints look
like? Based on what you described, Pacemaker shouldn't try to stop the
Filesystem resource before successfully stopping the VM first.
-- 
Ken Gaillot 
