Andrew,
this is definitely bad and shouldn't ever happen (a correct corosync
shutdown ends with "Corosync Cluster Engine exiting with status -1 at
main.c:162"). Sadly, I'm not able to reproduce what you describe on one
node with:
- Fedora 12
- pacemaker from Fedora repository (no configuration, just added service
to corosync)
- Corosync trunk + my patch

Can you please send me your Pacemaker configuration (or better, the full
configuration of what you are using)? Or, even better, would it be
possible to upload a VM (ideally a compressed qcow2 image) of your
installation somewhere?

Regards,
  Honza

Andrew Beekhof wrote:
> On Wed, Jan 27, 2010 at 8:45 AM, Andrew Beekhof <[email protected]> wrote:
>> On Mon, Jan 25, 2010 at 4:12 PM, Jan Friesse <[email protected]> wrote:
>>> Andrew Beekhof wrote:
>>>> Can you describe what it does please.
>>>>
>>> It should ensure that:
>>> - the service's exit_fn is called before exit
>>> - all IPC connections using the service are disconnected, waiting for
>>>   the IPC thread to end properly
>>> - the service is correctly unlinked
>>>
>>> But I think what Steve wants is to test whether Pacemaker works
>>> correctly with this patch.
>> Yeah, but it helps to know what one is testing in order to know how
>> best to test it.
>> Running with trunk + your new patch now...
> 
> Should logging be affected?
> I'm frequently seeing nothing at all from corosync after
> 
> Jan 27 12:56:04 pcmk-4 corosync[1878]:   [pcmk  ] notice:
> pcmk_shutdown: Shutdown complete
> Jan 27 12:56:04 pcmk-4 corosync[1878]:   [SERV  ] Service engine
> unloaded: Pacemaker Cluster Manager 1.1.0
> 
> There are no cores (ulimit -c is set to unlimited) and the processes are
> gone, so it's unclear whether there is a problem.

_______________________________________________
Openais mailing list
[email protected]
https://lists.linux-foundation.org/mailman/listinfo/openais