Andrew Beekhof wrote:
Running CTS on 6 nodes has shown MAXMSG to be too small - the PE cannot send its transition graph and the cluster stalls indefinitely.

So, that means the CIB is > 256K compressed?  Or is it > 256K uncompressed?

We could increase the value, but looking through the code this seems to be an artificial limitation to varying degrees...

* In some cases it's used as a substitute for get_netstringlen(msg) - I believe these should be fixed.

* In some cases it's used to pre-empt checks by "child" functions - I believe these should be removed.

The two cases that seem to legitimately use MAXMSG are the HBcomm plugins and the decompression code (though even the latter could retry a "couple" of times with larger buffers).


Alan, can you please take a look at the use of MAXMSG in the IPC layer (especially the HBcomm plugins), which is really not my area of expertise, verify that my assessment is correct, and possibly get someone to look at fixing it?

Unfortunately, this means various buffers get locked into memory at this size. Our processes are already pretty huge. get_netstringlen() is an expensive call.

Why do you think that predicting when child buffers will be too large is a bad idea? How do you expect removing those checks to help?

Is your concern related to compressed/uncompressed sizes?


--
    Alan Robertson <[EMAIL PROTECTED]>

"Openness is the foundation and preservative of friendship... Let me claim from you at all times your undisguised opinions." - William Wilberforce
_______________________________________________________
Linux-HA-Dev: Linux-HA-Dev@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
Home Page: http://linux-ha.org/