For a highly scalable distributed service that I'm building, I'm considering the event sourcing pattern, implemented with Akka clustering and persistent actors. I have a few questions related to event sourcing.
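To make the questions below concrete, here is the event-sourcing shape I have in mind, as a minimal plain-Scala sketch (no Akka; the event/state names are placeholders I made up): state is rebuilt by folding journaled events, and a snapshot lets recovery start partway through the journal instead of replaying everything.

```scala
// Minimal event-sourcing sketch (plain Scala, no Akka).
// AccountEvent/AccountState/Replay are hypothetical names for illustration.
sealed trait AccountEvent
case class Deposited(amount: Long) extends AccountEvent
case class Withdrawn(amount: Long) extends AccountEvent

case class AccountState(balance: Long = 0L) {
  // Pure state transition: current state + event => next state.
  def applyEvent(e: AccountEvent): AccountState = e match {
    case Deposited(a) => copy(balance = balance + a)
    case Withdrawn(a) => copy(balance = balance - a)
  }
}

object Replay {
  // Recovery is a fold over the journal; with a snapshot, recovery
  // starts from (snapshotState, eventsAfterSnapshot) rather than
  // replaying the full event history.
  def replay(snapshot: AccountState, eventsAfter: Seq[AccountEvent]): AccountState =
    eventsAfter.foldLeft(snapshot)(_.applyEvent(_))
}

object Demo extends App {
  val journal = Seq(Deposited(100), Withdrawn(30), Deposited(5))
  // Full replay from the empty state:
  val full = Replay.replay(AccountState(), journal)
  // Same result when recovering from a snapshot taken after the first event:
  val fromSnap = Replay.replay(AccountState(100), journal.drop(1))
  println(full.balance)     // 75
  println(fromSnap.balance) // 75
}
```

The snapshot path and the full-replay path must agree, which is where my questions about snapshots, recovery time, and purging old events come from.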
- Are there scalability concerns with event sourcing, e.g., when there are several million events in the system? Has anyone used event sourcing at scale in production?
- What is the best way to handle schema changes in an Akka persistent actor?
- In practice, what is the best way to handle data corruption? Is it through snapshots?
- What are the best practices for purging old events without impacting recovery?

Thanks in advance!

Sudhi
