Re: [ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-03-28 Thread Andrew Beekhof
On Thu, Mar 29, 2018 at 6:41 AM, Jehan-Guillaume de Rorthais <
j...@dalibo.com> wrote:

> On Wed, 28 Mar 2018 12:40:25 -0500
> Ken Gaillot  wrote:
>
> > Hi all,
> >
> > Andrew Beekhof brought up a potential change to help with reading
> > Pacemaker logs.
> >
> > Currently, pacemaker daemon names are not intuitive, making it
> > difficult to search the system log or understand what each one does.
> >
> > The idea is to rename the daemons, with a common prefix, and a name
> > that better reflects the purpose.
> >
> > I think it's a great idea, but we have to consider two drawbacks:
> >
> > * I'm about to release 2.0.0-rc2, and it's late in the cycle for a
> > major change. But if we don't do it now, it'll probably sit on the back
> > burner for a few years, as it wouldn't make sense to introduce such a
> > change shortly after a major bump.
> >
> > * We can change *only* the names used in the logs, which will be
> > simple, but give us inconsistencies with what shows up in "ps", etc. Or
> > we can try to change everything -- process names, library names, API
> > function/structure names -- but that will impact other projects such as
> > sbd, crmsh, etc., potentially causing compatibility headaches.
> >
> > What are your thoughts? Change or not?
>
> change
>
> > Now or later?
>
> I'm not sure how much work it involves during rc time... But I would pick
> now
> if possible.
>
> > Log tags, or everything?
>
> Everything.
>
> I'm from the PostgreSQL galaxy. In this galaxy, the parameter
> "update_process_title" controls whether PostgreSQL sets human-readable
> process titles; it is "on" by default. In ps, it gives:
>
>   ioguix@firost:~$ ps f -o cmd -C postmaster
>   CMD
>   postmaster -D /home/ioguix/var/lib/pgsql/96
>\_ postgres: checkpointer process
>\_ postgres: writer process
>\_ postgres: wal writer process
>\_ postgres: autovacuum launcher process
>\_ postgres: stats collector process
>
> Some processes might even add useful information about their status,
> e.g. the current replication status and location (the wal receiver/sender
> processes) or the last WAL archived (the archiver process).
>
> In source code, it boils down to this function:
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/utils/misc/ps_status.c;h=5742de0802a54e38a2c2e3cfa8e8f445b6822883;hb=65c6b53991e1c56f6a0700ae26928962ddf2b9fe#l321


sbd also has similar code
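
For readers who haven't seen the trick, here is a minimal, untested sketch of
the simplest half of it (Linux-specific, and not the actual PostgreSQL or sbd
code): prctl(PR_SET_NAME) sets the short per-task name shown by "ps -C" /
"ps -o comm" and /proc/<pid>/comm, while ps_status.c additionally rewrites
argv[] so the full command line can carry live status text. The
"pcmk-schedulerd" name is merely one of the candidates discussed further down.

  #include <stdio.h>
  #include <string.h>
  #include <sys/prctl.h>

  int main(void)
  {
      /* The kernel's comm field holds at most 15 visible characters
       * (16 bytes including the NUL), which happens to match the default
       * "ps" column width mentioned later in this thread. */
      const char *name = "pcmk-schedulerd";    /* exactly 15 characters */

      if (prctl(PR_SET_NAME, name, 0, 0, 0) != 0) {
          perror("prctl(PR_SET_NAME)");
          return 1;
      }
      printf("comm set to \"%s\" (%zu chars)\n", name, strlen(name));
      return 0;
  }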


>
>
> > And the fun part, what would we change them to ...
> >
> > Beekhof suggested renaming "pengine" to "cluster-planner", as an
> > example.
> >
> > I think a prefix indicating pacemaker specifically would be better than
> > "cluster-" for grepping and intuitiveness.
> >
> > For intuitiveness, long names are better ("pacemaker-FUNCTION"). On the
> > other hand, there's an argument for keeping names to 15 characters,
> > which is the default "ps" column width, and a reasonable limit for log
> > line tags. Maybe "pm-" or "pcmk-"? This prefix could also be used for
> > library names.
>
> "pcmk-*" sounds better to me. "cluster" has so many different definiion in
> people mind...
>
> > Looking at other projects with server processes, most use the
> > traditional "d" ending (for example, "rsyslogd"). A few add "-daemon"
> > ("rtkit-daemon"), and others don't bother with any suffix ("gdm").
> >
> > Here are the current names, with some example replacements:
> >
> >  pacemakerd: PREFIX-launchd, PREFIX-launcher
>
> pacemakerd, alone, sounds perfectly fine to me.
>
> >  attrd: PREFIX-attrd, PREFIX-attributes
>
> PREFIX-attributes
>

PREFIX-keystored ?


>
> >  cib: PREFIX-configd, PREFIX-state
>
> Tricky...It deals with both config and state...
>

PREFIX-datastore ?


>
> By the way, why must the CIB messages be in the log file in the first
> place?
>

Because it's the only record of what changed and when.
Which is almost never important, except for those times when the
information is critical.
Which, unfortunately, describes almost all of Pacemaker's logging.

On a related topic, I think if we made file logging mandatory then we could
move a lot more logs (including most of the cib logs) out of syslog.


> Filling logs with XML diffs resulting from other actions already logged
> earlier sounds like duplicated information.
>

We generally only log configuration changes - which aren't logged in any
other form.


>
> I suppose most of the CIB logging messages could be set to debug level or
> replaced by a simple "cib updated with new {setup|status}"?
>
> >  crmd: PREFIX-controld, PREFIX-clusterd, PREFIX-controller
>
> PREFIX-controld
>
> >  lrmd: PREFIX-locald, PREFIX-resourced, PREFIX-runner
>
> PREFIX-executord? PREFIX-execd ?
>

PREFIX-executiond


> >  pengine: PREFIX-policyd, PREFIX-scheduler
>
> PREFIX-schedulerd
>
> > stonithd: PREFIX-fenced, PREFIX-stonithd, PREFIX-executioner
>
> I always disliked the STONITH acronym. For the same reasons, I
> dislike "executioner" as well. Moreover, some people might think it
> 

Re: [ClusterLabs] symmetric-cluster=false doesn't work

2018-03-28 Thread George Melikov
Thank you for the clarification!
I think you're right - our latest config doesn't have any problem with asymmetric
operation.

26.03.2018, 22:37, "Ken Gaillot" :
> On Tue, 2018-03-20 at 22:03 +0300, George Melikov wrote:
>>  Hello,
>>
>>  I tried to create an asymmetric cluster via property symmetric-
>>  cluster=false , but my resources try to start on any node, though I
>>  have set locations for them.
>>
>>  What did I miss?
>>
>>  cib: https://pastebin.com/AhYqgUdw
>>
>>  Thank you for any help!
>>  
>>  Sincerely,
>>  George Melikov
>
> That output looks fine -- the resources are started only on nodes where
> they are allowed. What are you expecting to be different?
>
> Note that resources will be *probed* on every node (a one-time monitor
> action to check whether they are already running there), but they
> should only be *started* on allowed nodes.
> --
> Ken Gaillot 


Re: [ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-03-28 Thread Jehan-Guillaume de Rorthais
On Wed, 28 Mar 2018 12:40:25 -0500
Ken Gaillot  wrote:

> Hi all,
> 
> Andrew Beekhof brought up a potential change to help with reading
> Pacemaker logs.
> 
> Currently, pacemaker daemon names are not intuitive, making it
> difficult to search the system log or understand what each one does.
> 
> The idea is to rename the daemons, with a common prefix, and a name
> that better reflects the purpose.
> 
> I think it's a great idea, but we have to consider two drawbacks:
> 
> * I'm about to release 2.0.0-rc2, and it's late in the cycle for a
> major change. But if we don't do it now, it'll probably sit on the back
> burner for a few years, as it wouldn't make sense to introduce such a
> change shortly after a major bump.
> 
> * We can change *only* the names used in the logs, which will be
> simple, but give us inconsistencies with what shows up in "ps", etc. Or
> we can try to change everything -- process names, library names, API
> function/structure names -- but that will impact other projects such as
> sbd, crmsh, etc., potentially causing compatibility headaches.
> 
> What are your thoughts? Change or not? 

change

> Now or later?

I'm not sure how much work it involves during rc time... But I would pick now
if possible.

> Log tags, or everything?

Everything.

I'm from the PostgreSQL galaxy. In this galaxy, the parameter
"update_process_title" controls whether PostgreSQL sets human-readable process
titles; it is "on" by default. In ps, it gives:

  ioguix@firost:~$ ps f -o cmd -C postmaster
  CMD
  postmaster -D /home/ioguix/var/lib/pgsql/96
   \_ postgres: checkpointer process   
   \_ postgres: writer process   
   \_ postgres: wal writer process   
   \_ postgres: autovacuum launcher process   
   \_ postgres: stats collector process   

Some processes might even add useful information about their status, e.g. the
current replication status and location (the wal receiver/sender processes) or
the last WAL archived (the archiver process).

In source code, it boils down to this function:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/utils/misc/ps_status.c;h=5742de0802a54e38a2c2e3cfa8e8f445b6822883;hb=65c6b53991e1c56f6a0700ae26928962ddf2b9fe#l321

> And the fun part, what would we change them to ...
> 
> Beekhof suggested renaming "pengine" to "cluster-planner", as an
> example.
> 
> I think a prefix indicating pacemaker specifically would be better than
> "cluster-" for grepping and intuitiveness.
> 
> For intuitiveness, long names are better ("pacemaker-FUNCTION"). On the
> other hand, there's an argument for keeping names to 15 characters,
> which is the default "ps" column width, and a reasonable limit for log
> line tags. Maybe "pm-" or "pcmk-"? This prefix could also be used for
> library names.

"pcmk-*" sounds better to me. "cluster" has so many different definiion in
people mind...

> Looking at other projects with server processes, most use the
> traditional "d" ending (for example, "rsyslogd"). A few add "-daemon"
> ("rtkit-daemon"), and others don't bother with any suffix ("gdm").
> 
> Here are the current names, with some example replacements:
> 
>  pacemakerd: PREFIX-launchd, PREFIX-launcher

pacemakerd, alone, sounds perfectly fine to me.

>  attrd: PREFIX-attrd, PREFIX-attributes

PREFIX-attributes

>  cib: PREFIX-configd, PREFIX-state

Tricky...It deals with both config and state...

By the way, why must the CIB messages be in the log file in the first place?
Filling logs with XML diffs resulting from other actions already logged earlier
sounds like duplicated information.

I suppose most of the CIB logging messages could be set to debug level or
replaced by a simple "cib updated with new {setup|status}"?

>  crmd: PREFIX-controld, PREFIX-clusterd, PREFIX-controller

PREFIX-controld

>  lrmd: PREFIX-locald, PREFIX-resourced, PREFIX-runner

PREFIX-executord? PREFIX-execd ?

>  pengine: PREFIX-policyd, PREFIX-scheduler

PREFIX-schedulerd

> stonithd: PREFIX-fenced, PREFIX-stonithd, PREFIX-executioner

I always disliked the STONITH acronym. For the same reasons, I
dislike "executioner" as well. Moreover, some people might think it actually
"executes" some process in the sense of "running".

I would definitely vote for PREFIX-fenced

>  pacemaker_remoted: PREFIX-remoted, PREFIX-remote

PREFIX-remoted


My 2¢,


[ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-03-28 Thread Ken Gaillot
Hi all,

Andrew Beekhof brought up a potential change to help with reading
Pacemaker logs.

Currently, pacemaker daemon names are not intuitive, making it
difficult to search the system log or understand what each one does.

The idea is to rename the daemons, with a common prefix, and a name
that better reflects the purpose.

I think it's a great idea, but we have to consider two drawbacks:

* I'm about to release 2.0.0-rc2, and it's late in the cycle for a
major change. But if we don't do it now, it'll probably sit on the back
burner for a few years, as it wouldn't make sense to introduce such a
change shortly after a major bump.

* We can change *only* the names used in the logs, which will be
simple, but give us inconsistencies with what shows up in "ps", etc. Or
we can try to change everything -- process names, library names, API
function/structure names -- but that will impact other projects such as
sbd, crmsh, etc., potentially causing compatibility headaches.

What are your thoughts? Change or not? Now or later? Log tags, or
everything?

And the fun part, what would we change them to ...

Beekhof suggested renaming "pengine" to "cluster-planner", as an
example.

I think a prefix indicating pacemaker specifically would be better than
"cluster-" for grepping and intuitiveness.

For intuitiveness, long names are better ("pacemaker-FUNCTION"). On the
other hand, there's an argument for keeping names to 15 characters,
which is the default "ps" column width, and a reasonable limit for log
line tags. Maybe "pm-" or "pcmk-"? This prefix could also be used for
library names.

Looking at other projects with server processes, most use the
traditional "d" ending (for example, "rsyslogd"). A few add "-daemon"
("rtkit-daemon"), and others don't bother with any suffix ("gdm").

Here are the current names, with some example replacements:

 pacemakerd: PREFIX-launchd, PREFIX-launcher

 attrd: PREFIX-attrd, PREFIX-attributes

 cib: PREFIX-configd, PREFIX-state

 crmd: PREFIX-controld, PREFIX-clusterd, PREFIX-controller

 lrmd: PREFIX-locald, PREFIX-resourced, PREFIX-runner

 pengine: PREFIX-policyd, PREFIX-scheduler

 stonithd: PREFIX-fenced, PREFIX-stonithd, PREFIX-executioner

 pacemaker_remoted: PREFIX-remoted, PREFIX-remote

-- 
Ken Gaillot 


Re: [ClusterLabs] Antw: Re: crm shell 2.1.2 manual bug?

2018-03-28 Thread Andrei Borzenkov
28.03.2018 15:12, Ulrich Windl wrote:
> 
> I was hoping "colocation ... ( B C D ) A" to be a shortcut of "colocation ...
> B A", "colocation ... C A", "colocation ... D A"...
> 

Well, my understanding is that "colocation { B C D } { A }" should do
exactly that.


[ClusterLabs] Antw: Re: crm shell 2.1.2 manual bug?

2018-03-28 Thread Ulrich Windl
>>> Andrei Borzenkov  wrote on 28.03.2018 at 12:39 in
message :
> 28.03.2018 13:25, Ulrich Windl wrote:
>> Hi!
>> 
>> For crmsh-2.1.2+git132.gbc9fde0-18.2 I think there's a bug in the manual 
>> describing resource sets:
>> 
>>    sequential
>>    If true, the resources in the set do not depend on each other 
>> internally. Setting sequential to true implies a strict order of dependency 
>> within the set.
>> 
>> Obviously "true" cannot mean both: "do not depend" and "depend". My guess is 
>> that the first true has to be false.
>> 
>> In addition, isn't the syntax with resource sets missing something (apart 
>> from the description)?:
>> 
>> colocation  : resource_sets
>>  [node-attribute=]
>> 
>> comparing to
>> 
>> colocation  : [:] [:]
>>  [node-attribute=]
>> 
>> I expected the with-rsc to be at the end also when using resource sets...
>> 
> 
> No, colocation with sets does indeed work differently; the syntax is correct
> (for the XML examples in Pacemaker Explained, that is ...)
> 
>> I came across this when trying to add a colocation like this:
>> colocation col_LV inf:( cln_LV cln_LV-L1 cln_LV-L2 cln_ML cln_ML-L1 
>> cln_ML-L2 ) cln_VMs
>> 
>> crm complained about this:
>> ERROR: 1: syntax in role: Unmatched opening bracket near  parsing 
>> 'colocation ...'
>> ERROR: 2: syntax: Unknown command near  parsing 'cln_ml-l2 ) 
>> cln_VMs'
>> (note the lower case)
>> 
> 
> You need to create a set consisting of a single resource. I do not know whether
> it is possible to colocate a set and a single resource - none of the examples
> show it.

I was hoping "colocation ... ( B C D ) A" to be a shortcut of "colocation ...
B A", "colocation ... C A", "colocation ... D A"...

Regards,
Ulrich


> 
> See also
> https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-sets-colocation.html


Re: [ClusterLabs] crm shell 2.1.2 manual bug?

2018-03-28 Thread Kristoffer Grönlund
"Ulrich Windl"  writes:

> Hi!
>
> For crmsh-2.1.2+git132.gbc9fde0-18.2 I think there's a bug in the manual 
> describing resource sets:
>
>sequential
>If true, the resources in the set do not depend on each other 
> internally. Setting sequential to true implies a strict order of dependency 
> within the set.
>
> Obviously "true" cannot mean both: "do not depend" and "depend". My guess is 
> that the first true has to be false.

Right, "do not depend" should be "depend" there. Thanks for catching it :)

> I came across this when trying to add a colocation like this:
> colocation col_LV inf:( cln_LV cln_LV-L1 cln_LV-L2 cln_ML cln_ML-L1 cln_ML-L2 
> ) cln_VMs
>
> crm complained about this:
> ERROR: 1: syntax in role: Unmatched opening bracket near  parsing 
> 'colocation ...'
> ERROR: 2: syntax: Unknown command near  parsing 'cln_ml-l2 ) 
> cln_VMs'
> (note the lower case)

The problem reported is that there is no space between "inf:" and "(" -
the parser in crmsh doesn't handle missing spaces between tokens right
now.
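
Presumably (untested, just applying that explanation) the same command parses
once the space is added: "colocation col_LV inf: ( cln_LV cln_LV-L1 cln_LV-L2
cln_ML cln_ML-L1 cln_ML-L2 ) cln_VMs".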

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] crm shell 2.1.2 manual bug?

2018-03-28 Thread Andrei Borzenkov
28.03.2018 13:25, Ulrich Windl wrote:
> Hi!
> 
> For crmsh-2.1.2+git132.gbc9fde0-18.2 I think there's a bug in the manual 
> describing resource sets:
> 
>sequential
>If true, the resources in the set do not depend on each other 
> internally. Setting sequential to true implies a strict order of dependency 
> within the set.
> 
> Obviously "true" cannot mean both: "do not depend" and "depend". My guess is 
> that the first true has to be false.
> 
> In addition, isn't the syntax with resource sets missing something (apart 
> from the description)?:
> 
> colocation  : resource_sets
>  [node-attribute=]
> 
> comparing to
> 
> colocation  : [:] [:]
>  [node-attribute=]
> 
> I expected the with-rsc to be at the end also when using resource sets...
> 

No, colocation with sets does indeed work differently; the syntax is correct
(for the XML examples in Pacemaker Explained, that is ...)

> I came across this when trying to add a colocation like this:
> colocation col_LV inf:( cln_LV cln_LV-L1 cln_LV-L2 cln_ML cln_ML-L1 cln_ML-L2 
> ) cln_VMs
> 
> crm complained about this:
> ERROR: 1: syntax in role: Unmatched opening bracket near  parsing 
> 'colocation ...'
> ERROR: 2: syntax: Unknown command near  parsing 'cln_ml-l2 ) 
> cln_VMs'
> (note the lower case)
> 

You need to create a set consisting of a single resource. I do not know whether
it is possible to colocate a set and a single resource - none of the examples show it.

See also
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-sets-colocation.html


[ClusterLabs] crm shell 2.1.2 manual bug?

2018-03-28 Thread Ulrich Windl
Hi!

For crmsh-2.1.2+git132.gbc9fde0-18.2 I think there's a bug in the manual 
describing resource sets:

   sequential
   If true, the resources in the set do not depend on each other 
internally. Setting sequential to true implies a strict order of dependency 
within the set.

Obviously "true" cannot mean both: "do not depend" and "depend". My guess is 
that the first true has to be false.

In addition, isn't the syntax with resource sets missing something (apart 
from the description)?:

colocation  : resource_sets
 [node-attribute=]

comparing to

colocation  : [:] [:]
 [node-attribute=]

I expected the with-rsc to be at the end also when using resource sets...

I came across this when trying to add a colocation like this:
colocation col_LV inf:( cln_LV cln_LV-L1 cln_LV-L2 cln_ML cln_ML-L1 cln_ML-L2 ) 
cln_VMs

crm complained about this:
ERROR: 1: syntax in role: Unmatched opening bracket near  parsing 
'colocation ...'
ERROR: 2: syntax: Unknown command near  parsing 'cln_ml-l2 ) cln_VMs'
(note the lower case)

Regards,
Ulrich




[ClusterLabs] Antw: PSA: pvmove -i0 and LVM resource

2018-03-28 Thread Ulrich Windl
>>> "Hayden,Robert"  schrieb am 27.03.2018 um 20:12 in
Nachricht
:

> Thought I would share an experience with the community.  We have RHEL 7.4 
> clusters that use the heartbeat LVM resource (HA-LVM volume group).  The LVM 
> resource does a "vgscan --cache" command as part of its monitoring routine.
> 
> We have found that the pvmove command option "-i0" will block the vgscan 
> command (most likely any LVM command).  The pvmove command just needs to be 
> executed on any physical volume and not specifically on one being managed by 
> RHCS.   In our case, the node where the pvmove was being executed was evicted 
> from the cluster.

Hi!

Some random thoughts on this:

1) pvmove, being I/O intensive, is best done during a maintenance interval. 
Such an operation is needed very rarely.

2) AFAIK, Linux LVM's pvmove does the whole move as one transaction (as opposed 
to the former HP-UX LVM, where it was done block-wise, meaning the relocated 
block could be used as soon as it was moved). So maybe that's the reason why 
LVM is seemingly unable to perform another command while pvmove is active.

3) I think pvmove does not really care which LVs the PEs (Physical Extents) 
belong to while moving.

4) Monitoring the state of LVM via vgscan is probably a bad idea (in HP-UX you 
only needed vgscan if you wanted to "import" a VG or PVs already present on a 
disk but unknown to the OS).

5) Maybe something like /proc/lvmstat (analogous to /proc/mdstat) is needed.

Sorry if I couldn't provide a silver bullet...

Regards,
Ulrich

> 
> Blocking Command:   pvmove -v -i0 -n /dev/testvg/testlv00 /dev/mapper/mpathd1 
> /dev/mapper/mpaths1
> 
> When testing without the -i0 option or with the -iX where X is non-zero, the 
> pvmove did not block vgscan commands.
> 
> Associated errors in /var/log/messages:
> 
> Mar 26 14:03:27 nodeapp1 lvmpolld: W: LVMPOLLD: polling for output of the 
> lvm cmd (PID 74134) has timed out
> 
> 
> Mar 26 14:04:27 nodeapp1 lvmpolld: W: LVMPOLLD: polling for output of the 
> lvm cmd (PID 74134) has timed out
> Mar 26 14:04:32 nodeapp1 lrmd[81636]: warning: share1_vg_monitor_6 
> process (PID 77254) timed out
> Mar 26 14:04:32 nodeapp1 lrmd[81636]: warning: share1_vg_monitor_6:77254 
> - timed out after 9ms
> Mar 26 14:04:32 nodeapp1 crmd[81641]:   error: Result of monitor operation 
> for share1_vg on nodeapp1: Timed Out
> Mar 26 14:04:32 nodeapp1 crmd[81641]:  notice: State transition S_IDLE -> 
> S_POLICY_ENGINE
> 
> 
> Mar 26 14:05:27 nodeapp1 LVM(share1_vg)[88723]: INFO: 0 logical volume(s) in 
> volume group "share1vg" now active
> Mar 26 14:05:27 nodeapp1 lvmpolld: W: LVMPOLLD: polling for output of the 
> lvm cmd (PID 74134) has timed out
> Mar 26 14:05:27 nodeapp1 lvmpolld[74130]: LVMPOLLD: LVM2 cmd is unresponsive 
> too long (PID 74134) (no output for 180 seconds)
> 
> 
> Mar 26 14:05:55 nodeapp1 lrmd[81636]: warning: share1_vg_stop_0 process (PID 
> 88723) timed out
> Mar 26 14:05:55 nodeapp1 lrmd[81636]: warning: share1_vg_stop_0:88723 - timed 
> out after 3ms
> Mar 26 14:05:55 nodeapp1 crmd[81641]:   error: Result of stop operation for 
> share1_vg on nodeapp1: Timed Out
> Mar 26 14:05:55 nodeapp1 crmd[81641]: warning: Action 6 (share1_vg_stop_0) 
> on nodeapp1 failed (target: 0 vs. rc: 1): Error
> Mar 26 14:05:55 nodeapp1 crmd[81641]:  notice: Transition aborted by 
> operation share1_vg_stop_0 'modify' on nodeapp1: Event failed
> Mar 26 14:05:55 nodeapp1 crmd[81641]: warning: Action 6 (share1_vg_stop_0) 
> on nodeapp1 failed (target: 0 vs. rc: 1): Error
> 
> 
> Mar 26 14:05:55 nodeapp1 pengine[81639]: warning: Processing failed op stop 
> for share1_vg on nodeapp1: unknown error (1)
> Mar 26 14:05:55 nodeapp1 pengine[81639]: warning: Processing failed op stop 
> for share1_vg on nodeapp1: unknown error (1)
> Mar 26 14:05:55 nodeapp1 pengine[81639]: warning: Cluster node nodeapp1 will 
> be fenced: share1_vg failed there
> Mar 26 14:05:55 nodeapp1 pengine[81639]: warning: Forcing share1_vg away 
> from nodeapp1 after 100 failures (max=100)
> Mar 26 14:05:55 nodeapp1 pengine[81639]: warning: Scheduling Node nodeapp1 
> for STONITH
> 
> Hope this helps someone down the line.
> 
> 
> Robert
> 
> 
> 