This is the same patch we applied for Whitetank.
It ensures any generated nodeid is a positive signed integer to make
the kernel/dlm/ocfs2 happier.
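The diff is attached rather than quoted inline, so as a rough illustration of the idea only (not the actual patch, and the function name below is made up): mask off bit 31 of the auto-generated nodeid so it stays positive when dlm/ocfs2 read it as a signed 32-bit integer.

#include <stdint.h>

/* Illustration only -- not high_bit.diff itself. */
static inline uint32_t nodeid_clear_high_bit(uint32_t generated_nodeid)
{
        /* Clearing the top bit guarantees the value is positive as an int32_t. */
        return generated_nodeid & 0x7fffffffU;
}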
Please ACK/NACK
high_bit.diff
This is a re-post of an earlier patch that decouples shutdown/startup
order from the objdb order.
This is needed as the objdb order will change as modules are
loaded/unloaded, and is also set up to unload non-default services last
(which is the opposite of what something like Pacemaker [...]
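The patch itself isn't quoted in this snippet, so purely as a sketch of the general idea (the names below are made up, not the openais service-handler API): drive shutdown from an explicit, fixed table instead of from whatever order the service entries currently sit in the objdb.

#include <stddef.h>

/* Sketch only -- hypothetical names, not the actual patch. */
struct service_entry {
        const char *name;
        void (*exec_exit_fn)(void);
};

/* Fixed, explicit start order; shutdown just walks the table backwards,
 * so loading or unloading modules no longer changes the ordering. */
extern struct service_entry service_table[];
extern size_t service_count;

void shutdown_all_services(void)
{
        size_t i;

        for (i = service_count; i > 0; i--) {
                if (service_table[i - 1].exec_exit_fn)
                        service_table[i - 1].exec_exit_fn();
        }
}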
Some minor fixes that allow building on OSX.
Please ACK/NACK
-- Andrew
osx.diff
Ryan O'Hara wrote:
On Tue, Jun 02, 2009 at 05:58:46PM +0200, Jan Friesse wrote:
The main problem is that sometimes message_handler_req_lib_lck_resourcelock(async)
asks for an already-deleted handle. So this patch tests the result of hdb_get...
and, if it is not 0, returns an error to the caller via the IPC send. So [...]
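The hdb call name is cut off in this snippet, so the sketch below uses stand-in names (not necessarily the functions the patch touches); the point is just the pattern: check the handle lookup's return code and answer the IPC caller with an error for an already-deleted handle, instead of dereferencing a stale instance pointer.

#define ERR_BAD_HANDLE 9   /* stand-in for SA_AIS_ERR_BAD_HANDLE */

/* Stand-in for the real handle-database lookup whose result the patch checks. */
int handle_get(unsigned long long resource_handle, void **instance);

int handle_resource_lock_request(unsigned long long resource_handle)
{
        void *instance;

        if (handle_get(resource_handle, &instance) != 0) {
                /* Handle already deleted: return an error to the caller
                 * (sent back over IPC) rather than touching 'instance'. */
                return ERR_BAD_HANDLE;
        }
        /* ... normal resourcelock processing using 'instance' ... */
        return 0;
}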
Andrew Beekhof wrote:
Some minor fixes that allow building on OSX.
Please ACK/NACK
It works for me.
ACK
--
Chrissie
At one time Solaris worked, but nobody has done a re-port in the
community since we moved to automake. I attempted to install Solaris
x86 in a KVM virtual machine and it just locks up on startup.
If anyone else has a Solaris machine to try out porting, it would be
helpful. I would really prefer [...]
When closing a resource via saLckResourceClose, we must finalize
(destroy) the lckResourceHandle. Without this we won't get
BAD_HANDLE errors if we close a resource and then do an operation on
that resource (i.e. saLckResourceLock).
I believe this solves the problem that Jan Friesse was seeing.
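As a sketch of what "finalize (destroy) the lckResourceHandle" means in practice (stand-in names again, not the actual lck service code): the close path has to destroy the handle as well as tear down the resource, so that a later saLckResourceLock on the same handle fails its lookup and comes back as BAD_HANDLE.

/* Stand-ins for the real handle-database calls. */
int handle_get(unsigned long long handle, void **instance);
int handle_put(unsigned long long handle);
int handle_destroy(unsigned long long handle);

int resource_close(unsigned long long resource_handle)
{
        void *instance;

        if (handle_get(resource_handle, &instance) != 0)
                return -1;                  /* handle already gone */

        /* ... tear down the resource itself ... */

        handle_put(resource_handle);        /* drop this reference */
        handle_destroy(resource_handle);    /* later lookups on this handle
                                               now fail -> BAD_HANDLE */
        return 0;
}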
Pretty sure this is a sync bug and I am working on it. If you could
print out the sync state at the time of the lockup in corosync, that would
verify it, or you could try my sync patch when I get it wrapped up.
Regards
-steve
On Wed, Jun 03, 2009 at 04:28:27PM -0500, David Teigland wrote:
Running cpgx -d1 on four nodes, where -d1 causes the test to periodically
kill and restart corosync. When this kill/restart happens on one node, others
are typically exiting/joining the cpg at the same time. The result is [...]
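cpgx itself isn't shown here; just to make the scenario concrete, below is a minimal sketch of the kind of join/leave churn the test drives while corosync is being killed and restarted on another node (assuming the corosync CPG API from <corosync/cpg.h>; type and constant names follow corosync 1.x, and none of this is cpgx code).

#include <string.h>
#include <unistd.h>
#include <corosync/cpg.h>

int main(void)
{
        cpg_handle_t handle;
        cpg_callbacks_t callbacks = { NULL, NULL };
        struct cpg_name group;
        int i;

        strcpy(group.value, "cpgx_sketch");
        group.length = strlen(group.value);

        /* Repeatedly join and leave the group, so membership changes keep
         * arriving while another node's corosync is killed and restarted. */
        for (i = 0; i < 100; i++) {
                if (cpg_initialize(&handle, &callbacks) != CS_OK)
                        return 1;
                if (cpg_join(handle, &group) != CS_OK)
                        return 1;
                sleep(1);
                cpg_leave(handle, &group);
                cpg_finalize(handle);
        }
        return 0;
}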