OK, so you are using the Xen hypervisor, not XenServer. I would like to achieve my replication with DRBD and XenServer.
But I was not thinking about doing it at the hypervisor level, rather at the VM level.

2014-10-10 15:40 GMT+02:00 France <[email protected]>:

> For ACS we are using XS 6.0.2 + hotfixes.
> For the “old cloud”, where the management servers reside in the form of
> Xen PV machines, we are using RHEL.
> OCFS2 is not needed when the virtual machine uses the DRBD device
> directly. We have one DRBD device per VM.
>
> Regards,
> F.
>
> On 09 Oct 2014, at 16:13, benoit lair <[email protected]> wrote:
>
> > Hello,
> >
> > OK, but I would like to have separate MySQL DB and management servers.
> > Are you using KVM or XenServer?
> > With your active/active DRBD cluster, are you using OCFS2?
> >
> > 2014-10-08 12:34 GMT+02:00 France <[email protected]>:
> >
> >> Put the ACS Java app on the same server and it will always have a
> >> working MySQL server whenever it is up itself.
> >> Also, I suggest you start using Pacemaker, Corosync or CMAN.
> >>
> >> My management server is actually a virtual instance on a RHEL
> >> active/active DRBD cluster (so live migration works).
> >>
> >> If you want to test how it behaves, stop MySQL and check for yourself.
> >> I highly doubt anyone has tested it yet.
> >>
> >> Regards,
> >> F.
> >>
> >> On 08 Oct 2014, at 11:37, benoit lair <[email protected]> wrote:
> >>
> >>> Hello folks,
> >>>
> >>> I'm trying a new HA implementation for my management server.
> >>>
> >>> I'm looking for HA for the MySQL server.
> >>>
> >>> Could it be problematic if I install a DRBD active/passive MySQL
> >>> cluster (DRBD & Heartbeat)?
> >>>
> >>> The reason is that if my primary server fails (and the cluster
> >>> therefore fails over and moves its VIP (Heartbeat) to the other
> >>> node), the MySQL server does not respond for a few seconds (due to
> >>> the deadtime parameter).
> >>>
> >>> So, is this scenario problematic for the integrity of the management
> >>> server?
> >>>
> >>> Thanks for your responses.
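
For reference, the active/passive MySQL cluster discussed above is typically
built from one DRBD resource plus a Pacemaker (crm shell) resource group.
This is only a minimal sketch, not a setup from this thread: the node names
(node1/node2), backing device (/dev/sdb1), mount point and VIP (10.0.0.100)
are assumed placeholders, and the resources use the standard ocf:linbit:drbd
and ocf:heartbeat agents.

    resource r0 {
        protocol  C;
        device    /dev/drbd0;
        disk      /dev/sdb1;      # backing partition/LV holding /var/lib/mysql (placeholder)
        meta-disk internal;
        on node1 { address 10.0.0.1:7788; }
        on node2 { address 10.0.0.2:7788; }
    }

    # Pacemaker resources: promote DRBD first, then mount the filesystem,
    # bring up the VIP and start mysqld on the primary node only.
    primitive p_drbd_mysql ocf:linbit:drbd params drbd_resource=r0 op monitor interval=15s
    ms ms_drbd_mysql p_drbd_mysql meta master-max=1 clone-max=2 notify=true
    primitive p_fs_mysql ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/var/lib/mysql fstype=ext4
    primitive p_ip_mysql ocf:heartbeat:IPaddr2 params ip=10.0.0.100 cidr_netmask=24
    primitive p_mysql ocf:heartbeat:mysql
    group g_mysql p_fs_mysql p_ip_mysql p_mysql
    colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
    order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start

The management servers would then point their database configuration at the
VIP, so a failover only causes the short outage mentioned above while the IP
and mysqld move to the surviving node.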

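The per-VM setup France describes (one DRBD resource per guest, the guest
using the DRBD device directly as its disk, dual-primary so live migration
works) would look roughly like this under plain Xen. Again a sketch only:
resource name, devices and addresses are placeholders.

    resource vm01 {
        protocol  C;
        device    /dev/drbd1;
        disk      /dev/vg0/vm01;  # backing LV for this guest (placeholder)
        meta-disk internal;
        net     { allow-two-primaries; }
        startup { become-primary-on both; }
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }

    # Xen PV guest config: hand the DRBD device to the domU as its root disk.
    disk = [ 'phy:/dev/drbd1,xvda,w' ]

Note that dual-primary DRBD needs working fencing on both nodes, otherwise a
split brain can corrupt the guest image.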