Hello guys, Dan's suggestion is indeed a great one. Java serialization might bite you when you least expect it, and that can happen right away when you start using remoting.
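For example, the default serializer binding can be overridden in application.conf. A minimal sketch, where `com.example.MyProtobufSerializer` and the marker trait `com.example.MyMessage` are hypothetical names standing in for your own classes:

```hocon
akka {
  actor {
    serializers {
      # hypothetical custom serializer class; write your own Serializer
      # implementation or use a library binding
      myproto = "com.example.MyProtobufSerializer"
    }
    serialization-bindings {
      # hypothetical marker trait implemented by all your own messages
      "com.example.MyMessage" = myproto
    }
  }
}
```

With a binding like this in place, every message extending the marker trait goes through your serializer instead of Java serialization.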
We aim to remove the Java serializer as the configured default serializer in future versions of Akka, so that people do not get this not-so-nice surprise (well, at least then you know you opted in to Java serialization). I'd recommend using a serialization format designed with performance in mind (protobuf, Cap'n Proto, Kryo, CBOR, or anything else you fancy). If you still require specialised help from our end to look into this, we do offer consulting and dev subscriptions, and then we could dig into your code in depth.

Happy hakking!

-- Konrad

On Thu, Oct 16, 2014 at 9:56 AM, Daniel Stoner <[email protected]> wrote:
> It may well not be the cause in your situation, but if you want to rule
> out serialisation troubles, turn on the following options in
> application.conf:
>
> akka {
>   actor {
>     serialize-creators = on
>     serialize-messages = on
>   }
> }
>
> This will force your application to attempt to serialise everything that
> passes between actors, including Props definitions for the creation of
> actors (which is where I spotted I'd put a non-serialisable Future inside
> my Props!). Best to do this only for your tests, or at least only leave
> it on temporarily: put your application onto one node only and scan your
> logs for ERROR messages.
>
> I had the same situation: everything worked lightning fast with one node,
> we tested our clustering and the systems were still running successfully
> but took extraordinary quantities of time. It all came down to a lot of
> messages failing serialisation and rolling back to the SQS queues we'd
> set up, only to eventually be consumed on the correct node.
>
> Thanks,
> Dan
>
> On Wednesday, 15 October 2014 14:50:21 UTC+1, Shajahan Palayil wrote:
>>
>> Hi,
>>
>> I'm developing an Akka-based application using event sourcing and CQRS.
>> I use the cluster sharding feature of the contrib module, the remoting
>> and cluster modules, and akka-persistence with JDBC persistence on
>> PostgreSQL.
>>
>> The application has an actor hierarchy like the diagram below, where the
>> top-level actor is cluster sharded and there are two levels of actors
>> below it (created using the usual context.actorOf() mechanism).
>>
>> https://drive.google.com/open?id=0ByesuJQ6vK9idWJxalR1NXN6QU0&authuser=0
>>
>> I ran some load tests on the application. Below are some findings:
>>
>> 1. When running with only a single node in the cluster, the application
>> is really fast.
>> 2. When the application is started with more than one node, request
>> processing gets gradually slower for each request being processed.
>>
>> Initially my assumption was that it had to do with Java serialization
>> and network latency. But when I really looked at the logs I found that
>> sending a message from actor A to A/1 takes more than a second in some
>> cases, and in other cases it does not.
>> Please keep in mind that there is no network overhead involved, as A to
>> A/1 messaging happens on the same node and is just parent-to-child
>> messaging.
>>
>> The actors don't do much computation other than applying some simple
>> business rules and persisting the events generated from the command.
>>
>> Questions:
>>
>> Why is the application slow when running on multiple nodes and fast on a
>> single node?
>>
>> *Pictures from the profiler:*
>>
>> *JVM memory attributes:*
>>
>> https://drive.google.com/open?id=0ByesuJQ6vK9iOHVRSEE0Um51TDQ&authuser=0
>>
>> *Thread status:*
>>
>> https://drive.google.com/open?id=0ByesuJQ6vK9idFM2U3dReGhKODg&authuser=0
>>
>> https://drive.google.com/open?id=0ByesuJQ6vK9iSHJLU2F4NUt2WG8&authuser=0
>>
>> *OS attributes:*
>>
>> https://drive.google.com/open?id=0ByesuJQ6vK9iVzZGSnNiVTJUaHM&authuser=0
>>
>> I'd appreciate any pointers in the right direction.
>>
>> Thanks,
>> Shajahan.
>>
>
> Notice: This email is confidential and may contain copyright material of
> members of the Ocado Group.
> Opinions and views expressed in this message may not necessarily reflect
> the opinions and views of the members of the Ocado Group.
>
> If you are not the intended recipient, please notify us immediately and
> delete all copies of this message. Please note that it is your
> responsibility to scan this message for viruses.
>
> References to the “Ocado Group” are to Ocado Group plc (registered in
> England and Wales with number 7098618) and its subsidiary undertakings (as
> that expression is defined in the Companies Act 2006) from time to time.
> The registered office of Ocado Group plc is Titan Court, 3 Bishops Square,
> Hatfield Business Park, Hatfield, Herts. AL10 9NE.
>
> --
> >>>>>>>>>> Read the docs: http://akka.io/docs/
> >>>>>>>>>> Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html
> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
> ---
> You received this message because you are subscribed to the Google Groups
> "Akka User List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/akka-user.
> For more options, visit https://groups.google.com/d/optout.

--
Akka Team
Typesafe - The software stack for applications that scale
Blog: letitcrash.com
Twitter: @akkateam
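As an aside, Dan's "non-serialisable Future inside my Props" problem can be reproduced outside Akka with plain Java serialization, which is what `serialize-messages = on` exercises for every message. A standalone sketch (`Handle`, `BadMessage`, and `GoodMessage` are hypothetical stand-ins, not Akka types):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for a non-serializable resource, like a Future or a connection.
class Handle { }

// A message that looks innocent but carries a non-serializable member.
class BadMessage implements Serializable {
    final Handle handle = new Handle(); // triggers NotSerializableException
}

// A message made of plain serializable data.
class GoodMessage implements Serializable {
    final String payload = "event-1";
}

public class SerializationCheck {
    // Attempt a Java-serialization round trip; false means the message
    // would fail the serialize-messages = on check in an Akka system.
    static boolean roundTrips(Object msg) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(msg);
            return true;
        } catch (IOException e) { // NotSerializableException is an IOException
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(new GoodMessage())); // prints true
        System.out.println(roundTrips(new BadMessage()));  // prints false
    }
}
```

Running a check like this over your message classes in a unit test catches such mistakes before they surface as cluster-wide slowness.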
