At the moment, it is possible for one cpg client to send a large burst
of messages.  If a receiving client is slow, messages queue up on its
behalf and memory is eventually exhausted (oom).  The only time this is
a problem is on dispatch of messages, since that is where the queueing
happens.

I have tried for a long time to sort this problem out and think I have
finally come up with a workable solution.

The ipcs will keep track of the memory it uses for queueing messages.
When it reaches a maximum threshold, it will call a callback registered
with the api indicating "at maximum memory usage".  When usage drops
back below a lower threshold, it will execute a callback indicating "it
is now safe to queue again".
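
To make the idea concrete, here is a minimal sketch of what that
coroipcs side could look like.  None of these names exist today - the
callbacks structure, the watermark values, and the accounting helpers
are all hypothetical placeholders for the proposal above.

#include <stddef.h>

#define IPCS_MEM_HIGH_WATERMARK (10 * 1024 * 1024)  /* stop queueing above this */
#define IPCS_MEM_LOW_WATERMARK  ( 2 * 1024 * 1024)  /* safe to queue again below this */

struct ipcs_flow_callbacks {
	void (*max_memory_reached) (void);   /* "at maximum memory usage" */
	void (*safe_to_queue_again) (void);  /* "it is now safe to queue again" */
};

static struct ipcs_flow_callbacks flow_cbs;
static size_t queued_bytes;
static int flow_stopped;

void ipcs_flow_callbacks_register (const struct ipcs_flow_callbacks *cbs)
{
	flow_cbs = *cbs;
}

/* called each time a message is queued for a slow dispatch client */
static void ipcs_account_queue (size_t msg_len)
{
	queued_bytes += msg_len;
	if (flow_stopped == 0 && queued_bytes >= IPCS_MEM_HIGH_WATERMARK) {
		flow_stopped = 1;
		if (flow_cbs.max_memory_reached) {
			flow_cbs.max_memory_reached ();
		}
	}
}

/* called each time a queued message is finally dispatched and freed */
static void ipcs_account_dequeue (size_t msg_len)
{
	queued_bytes -= msg_len;
	if (flow_stopped == 1 && queued_bytes <= IPCS_MEM_LOW_WATERMARK) {
		flow_stopped = 0;
		if (flow_cbs.safe_to_queue_again) {
			flow_cbs.safe_to_queue_again ();
		}
	}
}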

These control callbacks coming out of coroipcs will feed into a service
(but not the engine) using totempg.  When coroipcs delivers the max
memory usage callback, the service will send a totem message indicating
the local node is at maximum memory usage.  When coroipcs delivers the
free-to-queue callback, the service will send a totem message
indicating the local node is able to queue again.
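
A rough sketch of that glue, assuming the message layout, the mcast
helper, and the nodeid lookup - these are illustrative names only, not
existing API:

#include <stdint.h>
#include <sys/uio.h>

enum flow_control_state {
	FLOW_CONTROL_OFF = 0,	/* safe to queue */
	FLOW_CONTROL_ON  = 1	/* at maximum memory usage */
};

struct req_exec_flow_control {
	uint32_t nodeid;
	uint32_t state;		/* enum flow_control_state */
};

/* assumed helper that multicasts through totempg for this service */
extern int flow_control_mcast (const struct iovec *iovec, unsigned int iov_len);
extern uint32_t local_nodeid (void);

static void flow_control_send (enum flow_control_state state)
{
	struct req_exec_flow_control req;
	struct iovec iov;

	req.nodeid = local_nodeid ();
	req.state = state;

	iov.iov_base = &req;
	iov.iov_len = sizeof (req);

	flow_control_mcast (&iov, 1);
}

/* wired into the coroipcs callbacks from the previous sketch */
static void max_memory_reached_cb (void)
{
	flow_control_send (FLOW_CONTROL_ON);
}

static void safe_to_queue_again_cb (void)
{
	flow_control_send (FLOW_CONTROL_OFF);
}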

This service will then allow for delivery of these totem messages
internally (as sync/syncv2 do today).  The service will keep a list of
nodes and whether flow control is on or off for each.  If any node in
the list has flow control on, the local node will reject new incoming
ipc requests with TRY_AGAIN.
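
For illustration only, the service side might look something like the
following; the fixed-size node table, handler name, and helper are all
assumptions made for the sketch:

#include <stdint.h>

#define MAX_NODES 64

struct node_flow_state {
	uint32_t nodeid;
	int flow_control_on;
	int in_use;
};

static struct node_flow_state node_state[MAX_NODES];

/* delivered on every node for each flow control multicast */
static void message_handler_req_exec_flow_control (
	uint32_t nodeid, uint32_t state)
{
	int i;
	int free_slot = -1;

	for (i = 0; i < MAX_NODES; i++) {
		if (node_state[i].in_use && node_state[i].nodeid == nodeid) {
			node_state[i].flow_control_on = state;
			return;
		}
		if (node_state[i].in_use == 0 && free_slot == -1) {
			free_slot = i;
		}
	}
	if (free_slot != -1) {
		node_state[free_slot].nodeid = nodeid;
		node_state[free_slot].flow_control_on = state;
		node_state[free_slot].in_use = 1;
	}
}

/*
 * consulted before accepting a new ipc request; if any node in the
 * cluster has flow control on, the caller returns TRY_AGAIN to the
 * library instead of queueing the request
 */
static int flow_control_is_enabled (void)
{
	int i;

	for (i = 0; i < MAX_NODES; i++) {
		if (node_state[i].in_use && node_state[i].flow_control_on) {
			return 1;
		}
	}
	return 0;
}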

Regards
-steve
