The project home page says "~2.5 million actors per GB of heap" -- that 
works out to roughly 430 bytes per actor. I'm assuming those actors have 
little or no state of their own, and that the figure includes the Akka 
kernel itself.
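To sanity-check that figure, the division is straightforward (a minimal sketch; reading "GB" as 1 GiB is my assumption, and `ActorMath` is just an illustrative name):

```java
// Back-of-envelope check of the "~2.5 million actors per GB" claim.
public class ActorMath {

    // Bytes available per actor if 2.5 million actors share 1 GiB of heap.
    static long bytesPerActor() {
        long bytesPerGiB = 1024L * 1024L * 1024L; // 1,073,741,824 bytes
        long actorsPerGb = 2_500_000L;            // quoted density
        return bytesPerGiB / actorsPerGb;         // integer division -> 429
    }

    public static void main(String[] args) {
        System.out.println(bytesPerActor() + " bytes per actor");
    }
}
```

429 bytes, which rounds to the ~430 quoted above.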

A system I built previously monitors about 750,000 components; at any 
given time roughly 10,000 of them are loaded in memory. That application 
uses JMS, runs under WebSphere, and averages about 1.3GB of heap.

So, even assuming the actors use 1k each, the entire problem domain would 
consume only around 750MB -- and even adding 250MB for "other" 
consumption, an Akka-based system would use less memory than the current 
one while keeping the *entire* problem domain in memory. Today we 
deserialize/re-serialize batches of components whenever a batch of 
messages arrives for them; with Akka it looks like we could limit 
deserialization to restarts, and serialization could happen asynchronously 
via Akka Persistence -- which would presumably give us a performance gain.
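The "deserialize only on restart, persist asynchronously" shape I have in mind would look roughly like this with classic Akka Persistence (a sketch only, not measured; `ComponentActor`, `Update`, and `StateChanged` are hypothetical names I made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import akka.persistence.AbstractPersistentActor;

// Hypothetical command/event types for illustration only.
final class Update       { final String data; Update(String data)       { this.data = data; } }
final class StateChanged { final String data; StateChanged(String data) { this.data = data; } }

class ComponentActor extends AbstractPersistentActor {
    private final String componentId;
    private final List<String> state = new ArrayList<>(); // stays resident for the actor's lifetime

    ComponentActor(String componentId) { this.componentId = componentId; }

    @Override
    public String persistenceId() { return "component-" + componentId; }

    // Events are replayed only when the actor (re)starts -- the one place
    // deserialization would still happen.
    @Override
    public Receive createReceiveRecover() {
        return receiveBuilder()
                .match(StateChanged.class, evt -> state.add(evt.data))
                .build();
    }

    // persistAsync journals the event without stalling the mailbox on I/O.
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Update.class, cmd ->
                        persistAsync(new StateChanged(cmd.data), evt -> state.add(evt.data)))
                .build();
    }
}
```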

Am I off base? Is the Akka/actor model that much more efficient? Has anyone 
seen gains like this?

-- 
>>>>>>>>>>      Read the docs: http://akka.io/docs/
>>>>>>>>>>      Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>>>>>>>      Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.
