Hi Victor,
Victor wrote:
[I tried to add this to Jira, but Jira throws me an error :) ]
Hmm, Jira is tired this morning. I also got some errors. You have to keep trying.
Emmanuel, I remember a bug where ConcurrentLinkedQueue$Node was the
main actor :) It is DIRMINA-709.
When I investigated it, I saw that the GC was busy all the time and
that there were tens of millions of ConcurrentLinkedQueue$Node objects
being allocated and released frequently. I tried to profile our
server with the YourKit profiler... without success, because of the
high load (it was in production).
I see the CLQ$Node objects accumulating in the test, but they get
garbage collected when the GC kicks in. However, I don't know why they are
present at all, as they should have been removed as soon as they were handled.
Then I prepared my own "profiling tool" for this specific problem. It
uses AspectJ - I added an aspect around the Queue.offer() and
Collection.add() method executions and grabbed the most frequent
stack traces from which these methods were called. If necessary, I can
share my "tool" here.
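The counting core of such a tool can be sketched in plain Java (the real tool wires this into Queue.offer()/Collection.add() via AspectJ pointcuts; the class and method names here are hypothetical, just for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class OfferSiteCounter {
    // Maps a call site (as a stack frame string) to how often it was hit.
    private static final Map<String, Integer> COUNTS = new HashMap<>();

    // In the real tool, an AspectJ advice calls this from the
    // Queue.offer()/Collection.add() join points.
    static synchronized void recordCaller() {
        StackTraceElement[] stack = Thread.currentThread().getStackTrace();
        // stack[0] = getStackTrace, stack[1] = recordCaller, stack[2] = caller
        String site = stack[2].toString();
        COUNTS.merge(site, 1, Integer::sum);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            recordCaller();
        }
        // Dump the most popular call sites with their hit counts.
        COUNTS.forEach((site, n) -> System.out.println(n + "\t" + site));
    }
}
```

Sorting the map by count at dump time then gives the "most popular stack traces" directly.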
I'm not sure we will go with AspectJ in MINA, but I'm wondering whether
those wouldn't be good candidates for JMX counters.
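For the record, exposing such a counter over JMX is cheap with a standard MBean; a minimal sketch (the `test.mina` object name and the `QueueStats` class are made up for the example, not MINA's actual JMX layout):

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: JMX derives the "OfferedCount" attribute
// from the getter name on the XxxMBean interface.
interface QueueStatsMBean {
    long getOfferedCount();
}

public class QueueStats implements QueueStatsMBean {
    private final AtomicLong offered = new AtomicLong();

    public void offered() { offered.incrementAndGet(); }
    public long getOfferedCount() { return offered.get(); }

    public static void main(String[] args) throws Exception {
        QueueStats stats = new QueueStats();
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("test.mina:type=QueueStats");
        mbs.registerMBean(stats, name);

        stats.offered();
        stats.offered();
        // Any JMX client (jconsole, etc.) can now read the counter;
        // here we read it back through the MBean server directly.
        System.out.println("OfferedCount = " + mbs.getAttribute(name, "OfferedCount"));
    }
}
```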
Anyway, DIRMINA-762 seems to me to be a different beast. The further
investigation I did last evening was quite interesting, and puzzling too:
- after running the client for a while, even though it runs in an infinite
loop, it looks like only 3 threads receive data while the 61 others are
doing nothing. It's like they are dead, but in RUNNABLE state!
- another interesting thing: as I only have 3 NioProcessors to handle
all the load, I added an executorFilter to the chain, and what I
see is absolutely scary: every time you launch some new clients, as
many threads are created on the server *and never removed or reused*,
even if you stop the clients. It's like those threads are dead and useless.
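That symptom is what a plain java.util.concurrent pool shows when its core threads never time out; this is not MINA's actual ExecutorFilter internals, just a standalone illustration of the behavior:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IdleThreadDemo {
    public static void main(String[] args) throws Exception {
        // Core threads with no timeout: each burst of tasks creates a
        // worker thread (up to corePoolSize), and those threads stay
        // alive forever once the tasks are done.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                16, 16, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        for (int i = 0; i < 16; i++) {
            pool.execute(() -> { /* simulate handling a client */ });
        }
        Thread.sleep(200); // let every task finish
        System.out.println("idle threads kept alive: " + pool.getPoolSize());
        // With pool.allowCoreThreadTimeOut(true) and a non-zero
        // keep-alive, these idle workers would be reclaimed instead.
        pool.shutdown();
    }
}
```

So if the filter's underlying executor is configured this way, "launch clients, see threads pile up and never go away" is exactly what you'd observe.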
Ok, I may need some coffee here. I have to rerun the tests now that I
got some sleep, but I find these things a bit worrying. I will
investigate further today.
--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com