I started a slave instance of my mysql role.  The behavior ended up
being quite surprising.  At first everything went as expected: a new
mysql instance launched, and a new EBS volume was created and mounted.

I started mysql on the slave and poked around to see whether it was
replicating.  It didn't look like it was, but that's not the main issue.
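
(For reference, the kind of check I mean is just the standard one from
the mysql client - roughly something like this, nothing exotic:)

  -- On the slave: on a healthy, replicating slave both
  -- Slave_IO_Running and Slave_SQL_Running should report "Yes",
  -- and Seconds_Behind_Master should be a number, not NULL.
  SHOW SLAVE STATUS\G

  -- On the master: confirms the binary log is being written,
  -- which the slave needs in order to replicate.
  SHOW MASTER STATUS;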

For some reason, my master suddenly became inaccessible.  I couldn't
ping it and I lost my ssh session.  Scalr said it was up and running,
but I couldn't ssh from the scalr web client either - it just hung.
The DNS zone continued to point to the IP of the master, but that IP
was dead.

Eventually (10 minutes or so later) I rebooted the master, which worked
- surprising, since I couldn't ping the IP.

When the former master came back up, it came up as the slave: the
former slave (which had no data, since it wasn't replicating) had been
promoted to master during the reboot.

So I terminated the original slave - the data-less instance now acting
as master.  It terminated fine.

But the former-master-now-slave with the real data on it stayed a
slave.  It's been another ten minutes now and there's no sign it will
be promoted - I've got a farm running with one mysql slave and no
masters.

What did I do wrong?

Rod