I guess that if I remove the EBS dependency it might work, but I wonder if there is some way of keeping it.
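One way to keep the EBS dependency without the detach/reattach downtime might be to give each newly launched app server its own volume, cloned from a snapshot of the running server's volume, so the old server stays up until the new one is serving. A rough sketch with the EC2 command-line tools (all volume, snapshot, and instance IDs below are hypothetical placeholders; the device name and availability zone would need to match your setup):

```shell
# Snapshot the live volume while the old app server keeps running
ec2-create-snapshot vol-11111111

# Create a fresh volume for the new app server from that snapshot
# (must be in the same availability zone as the new instance)
ec2-create-volume --snapshot snap-22222222 -z us-east-1a

# Attach the new volume to the newly launched app server,
# which can then mount it and start the application
ec2-attach-volume vol-33333333 -i i-44444444 -d /dev/sdf
```

The old server's volume is never detached, so you only cut traffic over once the new server has mounted its own copy. The trade-off is that writes to the old volume after the snapshot don't reach the new one, so this fits read-mostly data better than data that is being written continuously.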
Gluster is itself clustered/replicated, so I believe you can keep the master redundant enough.

On Tue, Mar 17, 2009 at 10:40 PM, Marten Nelson <[email protected]> wrote:
>
> Maybe this wouldn't work for your app, but I think this is where a
> cluster solution with EBS (gluster?) could help out. You store all
> application files on the cluster server. Your app servers are nodes
> and load files from the EBS attached to your cluster server. But then,
> how do you create redundancy at the cluster server? That I don't know...
>
> M
>
> On Mar 17, 2009, at 1:27 PM, Arie Fishler wrote:
>
> > I have a problem setting up a procedure to synchronize my app servers
> > while avoiding downtime in the way Scalr currently operates.
> >
> > I am trying to figure out the best way to approach this.
> >
> > The simplest example is having one app server. Let's say that my app
> > is dependent on the fact that an EBS volume is attached and mounted
> > to the app server.
> >
> > If I synchronize all, a new app server is started but not a new EBS
> > volume. Then the current running app server is taken down (= downtime,
> > since the app on the new server has not yet started), the EBS volume
> > is freed and attached to the new server. Only then can the
> > application start.
> >
> > The same downtime occurs when more app servers are involved. The fact
> > that the same EBS volumes are attached to the newly restarted
> > instances is a problem, since my application cannot start without an
> > EBS volume.
> >
> > Any idea around this?
> >
> > Arie

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "scalr-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/scalr-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---
