Hi everyone, very interesting discussion - we are currently seeing very similar symptoms in our load tests.
Setup: two hosts connected via a gigabit network. Gatling (host 1) pushes 1500 messages/sec (each ~4 KB) into our server (host 2). The server runs Akka 2.3-RC1; akka-remote (not Cluster) is used. The server-side calculations are largely CPU-bound and CPU usage is at almost 100% (4 cores). The heap also seems large enough.

After approx. 3 minutes, the communication between Gatling and our server breaks and is not re-established until 2 or 3 minutes later. We have approx. 310 actors running on 2 different dispatchers (the default plus 1 additional one) - I cannot tell how many futures. Interestingly, the log message about the actor getting gated for 5000 ms appears very late - shortly before functionality resumes.

The work packages are independent, so interruptions and thread reassignment should be easily possible - from that perspective, I cannot see anything hogging threads. Our dispatcher settings for number-of-threads and throughput also seem reasonable - we've tested several configurations, and the system seems most responsive with the standard dispatcher settings.

@Patrik Maybe these figures can help you. We are still looking for the source of the issue somewhere in our app...

cheers,
-Tom

On Thursday, August 8, 2013 at 22:18:03 UTC+2, Patrik Nordwall wrote:
>
> On Thu, Aug 8, 2013 at 8:56 PM, Jason Kolb <[email protected]> wrote:
>
>> Hi Patrik,
>>
>>> It is probably as Viktor points out, but I would be interested in
>>> collecting another user experience in this area. Can you describe what your
>>> workers are doing? How many actors are active during the 20 min? How many
>>> messages? How long does the processing of the messages take?
>>
>> They're doing a lot of matrix multiplication and statistics collection.
>> I'm not sure of the number of messages, but I'd say on the order of 1
>> million. It doesn't take very long per message, but in aggregate it takes a
>> really long time.
>
> And the number of actors?
>>> You are not using akka-cluster, right? Then I'm surprised that you say
>>> that disassociation prevents any further communication, because a new
>>> connection should be established.
>>
>> The actor system that's taking a long time is clustered, but the one
>> that's timing out is not (it's only remoting-enabled). I set up two actor
>> systems and use the remoting one for communication with nodes outside the
>> cluster; the clustered actor system does the heavy lifting.
>>
>>> /Patrik
>>>
>>> On Wed, Aug 7, 2013 at 7:46 PM, √iktor Ҡlang <[email protected]> wrote:
>>>
>>>> Hi,
>>>>
>>>> Doing computationally heavy work isn't the problem. Hogging threads is.
>>>>
>>>> Change to a dedicated dispatcher and switch to message-passing
>>>> recursion to give other actors a chance to run, then tune the throughput
>>>> setting for fairness.
>>>>
>>>> Cheers,
>>>> V
>>>>
>>>> On Aug 7, 2013 7:18 PM, "Jason Kolb" <[email protected]> wrote:
>>>>
>>>>> I have a scenario where two actor systems are talking via remoting
>>>>> (using TCP). When one of the actor systems does a very computationally
>>>>> heavy operation (100% CPU for ~20 minutes), the other system disassociates
>>>>> from it, which prevents any further communication. I assume this is because
>>>>> the heartbeat mechanism is not functioning properly.
>>>>>
>>>>> Is there a way to either disable the disassociation mechanism or
>>>>> temporarily set it to such a high timeout that it can go 20-30 minutes
>>>>> without a heartbeat?
>>>>>
>>>>> Thanks,
>>>>> Jason
>>>>>
>>>>> --
>>>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>>>> >>>>>>>>>> Check the FAQ: http://akka.io/faq/
>>>>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>>>>> ---
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "Akka User List" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to [email protected].
>>>>> To post to this group, send email to [email protected].
>>>>> Visit this group at http://groups.google.com/group/akka-user.
>>>>> For more options, visit https://groups.google.com/groups/opt_out.
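Viktor's suggestion of a dedicated dispatcher with a fairness-tuned throughput would look roughly like the sketch below in `application.conf`. The name `heavy-dispatcher` and the sizing numbers are placeholders, not recommendations:

```hocon
# A separate dispatcher for the CPU-bound workers, so that they cannot
# starve the default dispatcher (which akka-remote also needs in order
# to keep its heartbeats flowing).
heavy-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-factor = 1.0   # ~1 thread per core for CPU-bound work
    parallelism-max = 4
  }
  # throughput = how many messages an actor may process before yielding
  # its thread; 1 is the fairest setting, at some scheduling overhead
  throughput = 1
}
```

The workers would then be started with `context.actorOf(props.withDispatcher("heavy-dispatcher"))`.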
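The "message-passing recursion" Viktor mentions can be sketched without the Akka dependency. The idea: instead of one receive that grinds for 20 minutes, the actor does a bounded chunk of work per message and sends itself a `Continue` message for the rest, so the dispatcher can interleave other actors (and remoting heartbeats) between slices. Here the mailbox is modeled as a plain queue; in real Akka code the `mailbox.add(...)` calls would be `self().tell(...)`, and the "work" is a stand-in:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class MessagePassingRecursion {
    // The self-message carrying the remaining work and the accumulator.
    record Continue(int remaining, long acc) {}

    static long run(int total, int chunkSize) {
        Queue<Continue> mailbox = new ArrayDeque<>();
        mailbox.add(new Continue(total, 0L)); // initial work request
        long result = 0L;
        while (!mailbox.isEmpty()) {          // models the dispatcher loop
            Continue msg = mailbox.poll();
            if (msg.remaining() == 0) {
                result = msg.acc();           // recursion bottoms out
            } else {
                int n = Math.min(chunkSize, msg.remaining());
                // bounded slice of work per "message" -- a stand-in for
                // one chunk of the real matrix computation
                long partial = msg.acc() + n;
                // "recurse" by sending ourselves the remainder
                mailbox.add(new Continue(msg.remaining() - n, partial));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(run(10, 3)); // prints 10
    }
}
```

With this shape, the `throughput` setting controls how many such slices one actor may run back-to-back before the thread is handed to another actor.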
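On Jason's original question about raising the timeout: in 2.2+ remoting the failure-detector tolerances are configurable. If I remember the keys correctly they are the ones below; the 30 s values are purely illustrative, and tuning them treats the symptom rather than the thread-hogging cause:

```hocon
akka.remote {
  # Transport-level heartbeats; a larger acceptable-heartbeat-pause lets
  # the failure detector tolerate longer stalls before disassociating.
  transport-failure-detector {
    heartbeat-interval = 4 s
    acceptable-heartbeat-pause = 30 s
  }
  # Heartbeats backing remote death-watch.
  watch-failure-detector {
    acceptable-heartbeat-pause = 30 s
  }
  # How long an endpoint stays gated after a failure
  # (the "gated for 5000 ms" in Tom's log).
  retry-gate-closed-for = 5 s
}
```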
