Just tested it again personally on my farm - works like a charm:
slaves are attached with exactly the same snapshot-based volume and
mounted according to the settings.
I can see in your log that you're manually detaching and attaching
auto-EBS volumes, which is not the right thing to do.

On Dec 28, 09:45, Ken M <[email protected]> wrote:
> Not sure if other EBS issues are resolved, but now I'm having sort of
> the opposite problem.
>
> With the new script to prevent snapshot reloading on MySQL machines
> with EBS, the master reboots and attaches successfully to the last used
> volume.
>
> However, the slaves aren't getting the most recent snapshots at all.
> Even though the role is configured to auto-attach from a snapshot, it
> isn't happening for me on the slaves.
>
> On Dec 14, 4:28 am, Alex Kovalyov <[email protected]> wrote:
>
> > > I have just run across this problem as well.
>
> > This apparently wasn't a problem after all - Kevin was detaching
> > auto-attached volumes by hand.
>
> > > When I terminated the farm, I selected to keep the EBS volumes,
> > > however, when the farm terminated, the volumes completely disappeared
> > > and had to be re-created on boot.  This also happened when I did a
> > > "synchronize to all" on the instance.  This defeats the purpose of
> > > EBS... and really screws up the ability to keep a persistent copy of
> > > up-to-the-minute data, which makes it impossible to keep logfiles or
> > > run applications that change their state (like users uploading images,
> > > etc).  I don't see anything in the logs regarding the volumes being
> > > deleted.  The farm id in question is 959 if that helps.
>
> > Do you know the IDs of the volumes that disappeared?
> > Can you help us reproduce this?
> > Thanks a lot.
>
> > On 12.12.08 01:23, "jmochs" <[email protected]> wrote:
>
> > > I have just run across this problem as well.
>
> > > When I terminated the farm, I selected to keep the EBS volumes,
> > > however, when the farm terminated, the volumes completely disappeared
> > > and had to be re-created on boot.  This also happened when I did a
> > > "synchronize to all" on the instance.  This defeats the purpose of
> > > EBS... and really screws up the ability to keep a persistent copy of
> > > up-to-the-minute data, which makes it impossible to keep logfiles or
> > > run applications that change their state (like users uploading images,
> > > etc).  I don't see anything in the logs regarding the volumes being
> > > deleted.  The farm id in question is 959 if that helps.
>
> > > Thanks,
> > > James
>
> > > On Dec 11, 2:56 pm, Kevin Baker <[email protected]> wrote:
> > >> On Dec 11, 2008, at 3:53 PM, Kevin Baker <[email protected]> wrote:
>
> > >>> On Dec 10, 2008, at 9:52 AM, Alex Kovalyov <[email protected]>  
> > >>> wrote:
>
> > >>>> Dear Kevin,
>
> > >>>> We've spent a few hours analyzing your logs and came to the
> > >>>> conclusion that you were detaching volumes that Scalr attaches for
> > >>>> you automatically and attaching another one manually.
> > >>>> Is this the right conclusion?
>
> > >>> Thank you tons for taking the time. Yes, this is the correct
> > >>> conclusion. However, I am trying to set up a config very similar to
> > >>> Ken M.'s.
>
> > >>> I want my EBS to automatically remount to an instance. The snapshot
> > >>> is fine for backups, but the goal is to use the inherent data
> > >>> persistence features of the EBS. This would require binding an EBS
> > >>> to a role and having any new instances Scalr boots into a role check
> > >>> for orphaned EBS volumes.
>
> > >>> I was understanding
>
> > >> Oops, hit the send button ;)
>
> > >> It was my understanding, from the Scalr role wizard, that this was
> > >> standard behavior.
>
> > >>>> If you want some particular volume to be attached, just make a
> > >>>> snapshot of it, and choose "Attach volume from snapshot:" on the
> > >>>> EBS tab.
>
> > >>>> On 10.12.08 17:45, "Kevin Baker" <[email protected]> wrote:
>
> > >>>>> I will be retesting with a fresh image today, and will update this
> > >>>>> thread.
>
> > >>>>> On Dec 10, 2008, at 1:21 AM, Sam <[email protected]> wrote:
>
> > >>>>>> This is worrying, as my application relies heavily on attaching
> > >>>>>> to the already existing EBS, and if it attaches a new EBS every
> > >>>>>> time, my application will fail.
> > >>>>>> Kevin, did you recently check this after a few problems were
> > >>>>>> fixed related to EBS?
>
> > >>>>>> On Dec 9, 7:28 pm, Kevin Baker <[email protected]> wrote:
> > >>>>>>> Alex,
>
> > >>>>>>> I have had an EBS volume attach successfully each time; however,
> > >>>>>>> it is always a new EBS volume.
>
> > >>>>>>> It goes something like this:
>
> > >>>>>>>   * Create EBS
> > >>>>>>>   * Format and attach to instance
> > >>>>>>>   * Restart instance
> > >>>>>>>   * New EBS is created
> > >>>>>>>   * Previous EBS is orphaned
>
> > >>>>>>> So each time I have an EBS, but just not the previous one.
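The orphaning in the sequence above can at least be detected after the fact by listing volumes left in the "available" state, i.e. created but attached to nothing. A rough Python sketch over DescribeVolumes-shaped data (the sample volumes below are invented):

```python
def orphaned_volumes(volumes):
    """Return the IDs of volumes that exist but are attached to nothing.

    Each entry follows the shape of an EC2 DescribeVolumes result item:
    an unattached volume has status 'available'.
    """
    return [v["volumeId"] for v in volumes if v.get("status") == "available"]

# Hypothetical listing after a restart:
volumes = [
    {"volumeId": "vol-aaaa1111", "status": "in-use"},      # the new volume
    {"volumeId": "vol-bbbb2222", "status": "available"},   # left behind
]
print(orphaned_volumes(volumes))  # ['vol-bbbb2222']
```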
>
> > >>>>>>> Alex Kovalyov wrote:
> > >>>>>>>> We've checked your log and found all attach and mount attempts
> > >>>>>>>> successful.
> > >>>>>>>> Please provide more details.
>
> > >>>>>>>> On Dec 8, 22:55, Kevin Baker <[email protected]> wrote:
>
> > >>>>>>>>> So would this apply to any EBS that would have been attached
> > >>>>>>>>> on startup?
>
> > >>>>>>>>> I have an EBS on my MySQL instance that has not been
> > >>>>>>>>> automatically reattached on restart.
>
> > >>>>>>>>> Alex Kovalyov wrote:
>
> > >>>>>>>>>> Confirmed - this was a bug on our side (fixed).
>
> > >>>>>>>>>> We weren't checking that attachmentSet->status is 'attached'
> > >>>>>>>>>> before initiating a volume mount on the instance; we only
> > >>>>>>>>>> checked that the volume status was "in-use".
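The corrected check Alex describes can be sketched roughly like this - not Scalr's actual code, just the logic of requiring the attachment state to be complete rather than only the volume status (a volume goes "in-use" while its attachment is still "attaching"):

```python
def ready_to_mount(volume, instance_id):
    """True only when the volume is in use AND its attachment to this
    instance has completed; 'attaching' is not enough to mount safely.

    `volume` follows the shape of an EC2 DescribeVolumes result item.
    """
    if volume.get("status") != "in-use":
        return False
    return any(
        att.get("instanceId") == instance_id and att.get("status") == "attached"
        for att in volume.get("attachmentSet", [])
    )

# A volume mid-attach: already 'in-use', but not yet safe to mount.
vol = {"status": "in-use",
       "attachmentSet": [{"instanceId": "i-123", "status": "attaching"}]}
print(ready_to_mount(vol, "i-123"))  # False
```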
>
> > >>>>>>>>>> On Dec 8, 11:59, Sam <[email protected]> wrote:
>
> > >>>>>>>>>>> I have EBS storage attached to my app instance. I ran mkdir
> > >>>>>>>>>>> fjstorage on the application server and then created 2 symlinks
> > >>>>>>>>>>> to store the local files on the server. Then I terminated and
> > >>>>>>>>>>> synchronised the farm and restarted the farm. Everything worked
> > >>>>>>>>>>> fine. I stopped the farm on Friday and then restarted it today.
> > >>>>>>>>>>> Everything else is fine on the app server, but the /fjstorage
> > >>>>>>>>>>> directory where EBS is to be mounted does not exist.
> > >>>>>>>>>>> My settings for EBS in the app server role were to create 10 GB
> > >>>>>>>>>>> of storage and attach it automatically to the /fjstorage
> > >>>>>>>>>>> directory on the app server.
> > >>>>>>>>>>> I had a similar problem before, which I was told had been
> > >>>>>>>>>>> fixed. This is happening again.
> > >>>>>>>>>>> Could you please help urgently, as we are going to put an
> > >>>>>>>>>>> application live this week, and this sort of problem with EBS
> > >>>>>>>>>>> storage definitely won't help.
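For the situation Sam hits (mount point missing after a restart), a startup-time guard can at least stop the app from silently writing to the root filesystem. A sketch of a hypothetical helper, using /fjstorage from his setup:

```python
import os
import sys

def require_mounted(path):
    """Refuse to continue if the EBS mount point is missing or nothing
    is mounted there, instead of letting the app write to the root
    filesystem by mistake."""
    if not os.path.isdir(path):
        sys.exit(path + " does not exist - was the EBS volume attached?")
    if not os.path.ismount(path):
        sys.exit(path + " exists but nothing is mounted there")

# require_mounted("/fjstorage")  # call this before starting the app
```

Run as the first step of the application's init script, this turns a silent data-placement bug into an immediate, visible failure.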
>
>
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"scalr-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/scalr-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---
