I'm working on an app where I need to send a high volume of messages from a user to a MUC room (a group of rooms, really). It looks like something is limiting the rate of messages into the rooms, though: I can only get about one message through every couple of seconds, and messages sent in between that interval get dropped.
I've done everything I can find to turn off karma and rate limiting in the jabber.xml and jadc2s.xml config files: I commented out all the <karma> elements in jabber.xml, and I set <max_bps> to 0 in jadc2s.xml.
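For reference, the karma blocks I commented out were the stock ones that ship with jabberd 1.4; the values below are from memory and may differ slightly from my copy, but the structure is the same:

<!--
<karma>
  <init>10</init>
  <max>10</max>
  <inc>1</inc>
  <dec>1</dec>
  <penalty>-6</penalty>
  <restore>10</restore>
</karma>
-->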
I've been testing this with a test client app that creates a room, joins it, and then starts sending numbered messages. For each open Jabber connection to the room, the test app waits for a response before sending the next message. I keep Exodus open in the same room with the debug window up so I can see when the messages actually arrive; they show up roughly every 3 seconds, spaced pretty evenly, and as I mentioned, messages sent in between that interval get dropped. To rule out client issues I've run the test with multiple threads in one process and with multiple single-threaded processes, and I get the same result either way. I've also run the test client from two different boxes to see whether jabber is limiting me by IP, and that doesn't make a difference either.
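In case it helps, this is roughly the stanza sequence the test client sends for each room (the JIDs, hostname, and nickname here are placeholders, not my real ones):

<!-- join/create the room (XEP-0045 MUC join presence) -->
<presence to='testroom@conference.example.com/loadtest'>
  <x xmlns='http://jabber.org/protocol/muc'/>
</presence>

<!-- then numbered groupchat messages, one at a time, waiting for the
     room to reflect each one back before sending the next -->
<message to='testroom@conference.example.com' type='groupchat'>
  <body>msg 1</body>
</message>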
So I have a few questions:
1) Anyone know straight off if this is rate limiting and how I can turn it off?
2) If there's no clear answer, what jabber logging options can I turn on to get better info? What I see in the logs right now is pretty minimal and not helpful for this issue (my current log config is pasted just below the questions).
3) Any other ideas why I might be seeing this effect?
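For what it's worth, the only logging I have configured is the stock error/record log sections from the 1.4.3 jabber.xml; from memory they look roughly like this:

<log id='elogger'>
  <host/>
  <logtype/>
  <format>%d: [%t] (%h): %s</format>
  <file>error.log</file>
  <stderr/>
</log>

<log id='rlogger'>
  <host/>
  <logtype>record</logtype>
  <format>%d %h %s</format>
  <file>record.log</file>
</log>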
The versions I'm running are:
OS: Linux 2.6.9-11
jabberd: 1.4.3
jadc2s: 0.9.0
muc: 0.6.0
Here's the io section of my jadc2s.xml:
<io>
  <max_fds>9973</max_fds>
  <max_bps>0</max_bps>
  <connection_limits>
    <connects>0</connects>
    <seconds>0</seconds>
  </connection_limits>
</io>
...and, as I mentioned above, all the <karma> elements in my jabber.xml are commented out.
Any help with this is greatly appreciated.
Thanks in advance,
Larry Kirschner
MTVNetworks
