Hi Martin and Philipp. Thanks for your email. What you are saying sounds great. I love Scala Actors and I know it's an important feature that brings people over to Scala.
I hope that I didn't offend you. You have done amazing things with and for Scala, and I really respect you guys. But I saw and felt the need for something like Akka, so I went away and built it. What you say about the trade-offs in actor expressiveness is really true. I (and David, I think) have not seen much need for nested receive/react, so I have not included it in the implementation, focusing instead on other things that I find more important. This article sums it up pretty well: http://erikengbrecht.blogspot.com/2009/06/pondering-actor-design-trades.html

Looking forward to 2.8 and the new actor implementation.

/Jonas

2009/9/30 martin <oder...@gmail.com>:
>
> About actors in Scala 2.8:
>
> . They have been refactored substantially compared to what's in the
>   2.7.x branch.
> . Philipp has sent mails about this to scala-internals (05/31).
> . Philipp has invited DPP to look at the refactorings in 2.8 (07/21),
>   to which he responded positively.
> . The ForkJoinPool in 2.8 is completely different from FJTask in
>   2.7.5; it's the version that's going into JDK7. It has been
>   battle-tested and should not suffer from any memory leaks.
>
> The reason why Scala actors use the FJ framework is performance, in
> particular on multi-core hardware. So we do not think it's a good idea
> to go back to java.util.concurrent, except maybe for applications with
> very specialized demands.
>
> We think the main problem was that lift depends on Scala 2.7.x, and
> that the actor refactorings have not gone into the 2.7.x branch. The
> result is that people have not noticed the changes. For example, most
> of the issues that Erik raises in his blog post no longer apply to
> Scala 2.8. Initially we wanted 2.8 to be out by now, but it has taken
> much longer than we foresaw, because some of the problems were harder
> than initially thought. We are sorry to have left the 2.7 branch
> relatively unattended for so long.
> It's difficult for us, though, to provide the resources to support two
> diverging branches in parallel. More community support with backports
> etc. could help.
>
> To fix the concrete issue at hand, we replaced FJTask with (a backport
> of) java.util.concurrent.ThreadPoolExecutor in the Scala 2.7.x branch,
> to be released as 2.7.7. That takes care of the memory leaks in FJTask.
>
> Now to the larger picture. We are not at all wedded to Scala actors
> here; after all, it's just a library. If there are others which fulfill
> some needs better, great! But we have to be honest to avoid confusion.
> One of the main differences between Scala actors on the one hand and
> lift actors and Akka on the other seems to be that only Scala actors
> provide nested receives, so only Scala actors really let you avoid an
> inversion of control. This is a feature which complicates the
> implementation considerably, and that's what all our main results are
> about. You might not care about this particular feature in your code,
> and consequently you might choose a different abstraction. But calling
> that abstraction simply `actors' causes unnecessary confusion, in our
> opinion. And that's not good for the goal of convincing people that
> actors are a useful concurrency abstraction. So, nothing against lift
> actors and Akka, but we need to be precise about the trade-offs. Maybe
> call them `flat actors' or something like that.
>
> Martin and Philipp

--
Jonas Bonér
twitter: @jboner
blog: http://jonasboner.com
work: http://crisp.se
work: http://scalablesolutions.se
code: http://github.com/jboner
code: http://akkasource.org

You received this message because you are subscribed to the Google Groups "Lift" group.
To post to this group, send email to firstname.lastname@example.org
To unsubscribe from this group, send email to liftweb+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/liftweb?hl=en
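The nested receive/react feature debated in the thread above can be sketched as follows. This is a hypothetical illustration using the `scala.actors` API of the 2.7/2.8 era (since removed from the standard library), not code from the thread; the message types and actor are invented for the example.

```scala
// Sketch of "nested receive": the second receive is reachable only
// after the first message arrives, so the two-step protocol state
// lives in the control flow rather than in an explicit state machine.
import scala.actors.Actor._

// Illustrative two-step protocol (names are assumptions, not from the thread).
case class Login(user: String)
case class Query(q: String)

val server = actor {
  receive {
    case Login(user) =>
      // Nested receive: we only handle Query after a Login.
      receive {
        case Query(q) =>
          println("answering " + q + " for " + user)
      }
  }
}

server ! Login("jonas")
server ! Query("status")
```

A "flat" actor in the sense Martin and Philipp use (the lift/Akka style) would instead have a single top-level message handler and track the logged-in state in a mutable field, trading the nested-receive expressiveness for a considerably simpler implementation.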