Referring to why I want to know this:

I'm using Stax because it's free, they are pretty cool, and Lift/Scala
worked out of the box pretty fast (I was having some issues finding
hosting for whatever I do in my free time with Lift/Scala). They
have a cluster option right now (still free), so while they are not the
best machines in the world, it lets me play with clustering.
No, I don't have a production application that needs to scale to many
boxes ATM... but I sure am curious about how things work!

"Once you get to the 100+ pages per second range, call me and we'll
figure out how to distribute your app across more than 1 server."
I will keep this mail stored in my secure mail secret location as
insurance! hahaha

On Jul 10, 6:58 am, David Pollak <[email protected]>
wrote:
> On Fri, Jul 10, 2009 at 6:50 AM, David Pollak <[email protected]> wrote:
>
> > On Fri, Jul 10, 2009 at 12:40 AM, DFectuoso <[email protected]> wrote:
>
> >> Thanks, that was very useful. To enable sticky variables, would I do
> >> something like what is explained here
> >> (http://wiki.stax.net/w/index.php/Application_Clustering)?
>
> >> So, bottom line: an actor can send a message to an actor that is living
> >> in another JVM
>
> > Scala's remote actors are very, very fragile and have not been used in any
> > kind of production or load environment to my knowledge.  They rely on Java's
> > serialization mechanism which has a series of problems (fragile in the face
> > of different class versions on different nodes, the tendency to serialize
> > the world because of references to globals, and the unsolved problem of
> > serializing Scala singleton objects).
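[Editor's note: the "serialize the world" failure mode described above can be seen with plain JDK serialization, no remote actors needed. The sketch below is illustrative, not Lift or Goat Rodeo code; `ConnectionPool` is a made-up stand-in for any non-serializable global that a bound function can quietly capture.]

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

object SerializationDemo {
  // Made-up stand-in for a non-serializable "global" (DB pool, session state, ...).
  class ConnectionPool

  // Java serialization walks every reachable reference and throws on the
  // first one that is not Serializable.
  def roundTrip(obj: AnyRef): Unit = {
    val out = new ObjectOutputStream(new ByteArrayOutputStream())
    try out.writeObject(obj) finally out.close()
  }

  // A "bound function" in the Lift sense: a closure that captures a
  // reference to the pool without the author ever thinking about it.
  def buildCallback(): Function0[String] = {
    val pool = new ConnectionPool
    new Function0[String] with Serializable {
      def apply(): String = { val p = pool; "rendered" }
    }
  }

  // Serializing the closure drags the captured pool along and blows up.
  def serializationFails(): Boolean =
    try { roundTrip(buildCallback()); false }
    catch { case _: NotSerializableException => true }

  def main(args: Array[String]): Unit =
    println("closure serializes cleanly: " + !serializationFails())
}
```

Nothing in the closure's signature hints that it holds a `ConnectionPool`, which is exactly why shipping bound functions between JVMs is fragile.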
>
> > I'm addressing these issues with Goat Rodeo (http://goatrodeo.org), but it
> > won't be ready for other people to mess with for a couple of months.
>
> And a follow up... what kind of application do you have that requires more
> throughput than a single JVM can handle?  In benchmarks that I've done, a
> big RDBMS gets saturated around the same time that a single Lift app gets
> saturated.  A big RDBMS can handle about 10K requests per second if most of
> those requests are reads and they come from cache.  If you have ACID enabled
> (so every write goes to disk), best case is 2K writes per second.
>
> A well tuned Lift app can serve 2K dynamically generated pages on a quad
> core box.  If each page has a couple of read queries and a write query,
> > you're pretty close to saturating your RDBMS.
>
> So, unless you're at Twitter/Facebook/LinkedIn levels of traffic, you'd be
> best designing your app for a single JVM.  Once you get to the 100+ pages
> per second range, call me and we'll figure out how to distribute your app
> across more than 1 server.
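[Editor's note: David's back-of-envelope argument can be checked with a little arithmetic. The figures below are his benchmark claims quoted from the post above, not independently measured numbers.]

```scala
object CapacityMath {
  // Figures quoted from David's post (his claims, not my measurements):
  val dbReadsPerSec   = 10000 // cached reads a big RDBMS can serve
  val dbWritesPerSec  = 2000  // ACID writes, every write hitting disk
  val liftPagesPerSec = 2000  // well-tuned Lift app on one quad-core box

  val readsPerPage  = 2 // "a couple of read queries" per page
  val writesPerPage = 1 // "and a write query" per page

  // Pages per second before each database path saturates:
  val readLimit  = dbReadsPerSec / readsPerPage   // 5000 pages/sec
  val writeLimit = dbWritesPerSec / writesPerPage // 2000 pages/sec
  val dbLimit    = math.min(readLimit, writeLimit)

  def main(args: Array[String]): Unit =
    // The app tier and the DB write path hit their ceilings at the same
    // load, so adding Lift nodes alone buys nothing until the DB scales too.
    println(s"DB-limited pages/sec = $dbLimit, Lift-limited = $liftPagesPerSec")
}
```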
>
> >> using sticky variables (or anything else)? (Sorry, I don't know if
> >> that's Terracotta; I don't know what that is.)
>
> >> On Jul 9, 11:53 pm, "marius d." <[email protected]> wrote:
> >> > Actors are local to the JVM. Scala also has RemoteActors, but we don't
> >> > really use them. For a Lift app in a cluster environment we have to
> >> > have the sticky-session concept, and the reason is that functions bound
> >> > to a session (and, mostly, the references they hold) are not
> >> > serialized & distributed. So assuming:
>
> >> > 1. Session 1 is created on Node 1
> >> > 2. If on a subsequent request (pertaining to Session 1) the load
> >> > balancer decides to dispatch the request to Node 2, you lose all
> >> > session context, including bound functions etc.
>
> >> > This is why the load balancer must guarantee that all requests
> >> > pertaining to the same session are dispatched to the same
> >> > node.
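[Editor's note: session affinity in miniature, for the curious — route on a stable hash of the session id so every request for one session hits the same node. This is an illustrative sketch, not Stax's actual mechanism; real balancers usually key on the JSESSIONID cookie instead.]

```scala
object StickyBalancer {
  // A stable hash of the session id picks the node, so all requests for
  // one session land on the same box. Masking the sign bit keeps the
  // index non-negative even when hashCode returns Int.MinValue.
  def route(sessionId: String, nodes: Vector[String]): String =
    nodes((sessionId.hashCode & 0x7fffffff) % nodes.size)

  def main(args: Array[String]): Unit = {
    val nodes = Vector("node1", "node2", "node3", "node4", "node5")
    // Three requests for the same session always hit the same node.
    println((1 to 3).map(_ => route("sess-42", nodes)))
  }
}
```

One design caveat: plain hash-mod routing reshuffles every session whenever the node list changes, which is one reason production balancers track a cookie rather than hashing.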
>
> >> > There were some efforts in the past to integrate Terracotta but I
> >> > guess there was a dead end somewhere.
>
> >> > You can of course build your own app to not use functions bound to a
> >> > session and rely only on the DispatchPF style (somewhat similar to
> >> > Spring controllers), but that's not very Lift-ish. In this case you can
> >> > persist your state in the DB (which is common to all nodes), and when a
> >> > request comes you just fetch the context data from the DB and set your
> >> > SessionVars. The problem with functions kept on the session is that
> >> > those functions can be lambda expressions referencing members of other
> >> > classes which are not serializable, etc. And even if they somehow were,
> >> > Java serialization is bad for performance.
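[Editor's note: the DB-backed alternative Marius sketches — keep nodes stateless and round-trip the session context through the shared database — might look like this. It is a framework-neutral sketch, not actual Lift API: `Db` is an in-memory Map standing in for the real shared RDBMS, and the string map stands in for your SessionVars.]

```scala
import scala.collection.mutable

object StatelessSessionDemo {
  // Stand-in for the shared RDBMS: session id -> persisted context.
  object Db {
    private val rows = mutable.Map.empty[String, Map[String, String]]
    def load(sessionId: String): Map[String, String] =
      rows.getOrElse(sessionId, Map.empty)
    def save(sessionId: String, ctx: Map[String, String]): Unit =
      rows(sessionId) = ctx
  }

  // A stateless node: every request restores context from the store,
  // applies its change, and persists before replying. No node holds
  // session state in memory, so no sticky routing is required.
  class Node {
    def handle(sessionId: String, key: String, value: String): Map[String, String] = {
      val ctx     = Db.load(sessionId)   // restore the "SessionVars"
      val updated = ctx + (key -> value) // this request's state change
      Db.save(sessionId, updated)        // persist before responding
      updated
    }
  }

  // Two requests for the same session, served by two different nodes.
  def run(): Map[String, String] = {
    val node1 = new Node
    val node2 = new Node
    node1.handle("sess-42", "user", "dfectuoso")
    node2.handle("sess-42", "theme", "dark") // node2 sees node1's write
  }

  def main(args: Array[String]): Unit = println(run())
}
```

The cost, as the thread notes, is a DB round-trip per request instead of an in-memory lookup — the performance trade that makes sticky sessions attractive.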
>
> >> > The bottom line is that sticky sessions have the benefit of
> >> > performance, because there is no state that needs to be distributed and
> >> > replicated among all cluster nodes, and no need to persist the session
> >> > state. But the drawback is that requests pertaining to the same
> >> > session need to be processed by the same node.
>
> >> > IMHO, using Lift apps in a cluster environment without sticky sessions
> >> > can be a very tricky thing to achieve.
>
> >> > Br's,
> >> > Marius
>
> >> > On Jul 10, 6:32 am, DFectuoso <[email protected]> wrote:
>
> >> > > I'm hosting some experiments on Stax, and right now I'm pondering
> >> > > the idea of checking out how to have a database-backed session so
> >> > > that SessionVars work in a cluster of 5 boxes. With that in mind, has
> >> > > anyone worked with actors and clustering? Is there some documentation
> >> > > around that? Should it work out of the box, or are there some words
> >> > > of encouragement for trying to work in this terrain?
>
> > --
> > Lift, the simply functional web framework http://liftweb.net
> > Beginning Scala http://www.apress.com/book/view/1430219890
> > Follow me: http://twitter.com/dpp
> > Git some: http://github.com/dpp
>

You received this message because you are subscribed to the Google Groups
"Lift" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/liftweb?hl=en
