Installation And Setup Guide For DRBD, OpenAIS, Pacemaker + Xen On OpenSUSE 11.1
http://www.howtoforge.com/installation-and-setup-guide-for-drbd-openais-pacemaker-xen-on-opensuse-11.1
Submitted by bhellman on Mon, 2009-08-17 18:48. :: Linux | SuSE | High-Availability | Virtualization

The following will install and configure DRBD, OpenAIS, Pacemaker and Xen on OpenSUSE 11.1 to provide highly available virtual machines. This setup does not utilize Xen's live migration capabilities. Instead, VMs will be started on the secondary node as soon as failure of the primary is detected. Xen virtual disk images are replicated between nodes using DRBD, and all services on the cluster are managed by OpenAIS and Pacemaker.

The following setup utilizes DRBD 8.3.2 and Pacemaker 1.0.4. It is important to note that DRBD 8.3.2 has come a long way since previous versions in terms of compatibility with Pacemaker; in particular, it includes a new DRBD OCF resource agent script and new DRBD-level resource fencing features. This configuration will not work with older releases of DRBD.

This document does not cover the configuration of Xen virtual machines. Instead, it is assumed you have a working virtual machine configured locally with a file-based disk image. As an example, our domU resource will manage a Debian virtual machine configured in debian.cfg.

Visit these links for more information on any of these components as well as additional documentation:

DRBD - http://www.drbd.org

Contents:

1. Install Xen
2. Install and Configure DRBD
3. Install and Configure OpenAIS + Pacemaker
4. Configure DRBD Master/Slave Resource
5. Configure File System Resource
6. Configure domU Resource
7. Additional Information
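For readers who do not yet have such a machine, a minimal file-based domU configuration might look like the sketch below. Every value here (kernel and ramdisk paths, image location, memory size) is an illustrative assumption, not a value from this guide; adjust it to your own system:

```
# /etc/xen/debian.cfg -- illustrative sketch only
name    = "debian"
memory  = 512
vcpus   = 1
# paravirtualized dom0 kernel and initrd (paths are assumptions)
kernel  = "/boot/vmlinuz-xen"
ramdisk = "/boot/initrd-xen"
# file-based disk image; later sections move this onto the DRBD-backed /xen mount
disk    = [ "file:/var/lib/xen/images/debian.img,xvda,w" ]
vif     = [ "bridge=br0" ]
root    = "/dev/xvda1 ro"
```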
1. Install Xen

The easiest way to install Xen and its prerequisites is through the yast command line tool:

# yast

Choose 'Virtualization' -> 'Install Hypervisor and tools'. If you're working on a remote server you may need to answer 'No' when asked about installing graphical components. Select 'Yes' when prompted about the Xen Network Bridge.

Select 'System' -> 'Boot Loader' and set the Xen kernel as the default kernel. Reboot.

At this point, the Xen kernel should be booted and a network interface br0 should be configured as a bridge to eth0.
2. Install and Configure DRBD

Compile and install on both nodes:

# cd /usr/src

Edit /etc/drbd.conf:

global {
    usage-count no;
}
common {
    protocol C;
}
resource r0 {
    disk {
        fencing resource-only;
    }
    handlers {
        # these handlers are necessary for drbd 8.3 + pacemaker compatibility
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    syncer {
        rate 40M;
    }
    on alpha {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.10.22:7789;
        meta-disk internal;
    }
    on bravo {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.10.23:7789;
        meta-disk internal;
    }
}

Copy it to the other node:

alpha:~ # scp /etc/drbd.conf r...@bravo:/etc/drbd.conf

Create the meta-data:

alpha:~ # drbdadm create-md r0

Start DRBD:

alpha:~ # /etc/init.d/drbd start
Starting DRBD resources: [ d(r0) s(r0) n(r0) ]..

bravo:~ # /etc/init.d/drbd start
Starting DRBD resources: [ d(r0) s(r0) n(r0) ].

After DRBD has started and connected, look at /proc/drbd on either node to get the status of the resource:

# cat /proc/drbd
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by r...@alpha, 2009-07-31 14:27:05

Sync the resource:

alpha:~ # drbdadm -- --overwrite-data-of-peer primary r0

As of DRBD 8.3.2, a new feature has been added to skip the initial sync if desired. NOTE: This is only intended for disks that are either blank or contain exactly the same data:

alpha:~ # drbdadm -- --clear-bitmap new-current-uuid r0
3. Install and Configure OpenAIS + Pacemaker

Prerequisites:

# zypper install tcl-devel ncurses-devel tcl

Obtain and install the latest versions of the HA utilities:

# wget http://download.opensuse.org/repositories/openSUSE:/11.1/standard/i586/OpenIPMI-2.0.14-1.35.i586.rpm

Create the AIS key:

alpha:~ # ais-keygen
bravo:~ # ais-keygen

Edit /etc/ais/openais.conf:

aisexec {
    user: root
    group: root
}
service {
    name: pacemaker
    ver: 0
}
totem {
    version: 2
    token: 1000
    hold: 180
    token_retransmits_before_loss_const: 20
    join: 60
    consensus: 4800
    vsftype: none
    max_messages: 20
    clear_node_high_bit: yes
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.10.0
        mcastaddr: 226.94.1.2
        mcastport: 5406
    }
}
logging {
    debug: off
    fileline: off
    to_syslog: yes
    to_stderr: no
    syslog_facility: daemon
    timestamp: on
}
amf {
    mode: disabled
}

Copy it to the other node:

alpha:~ # scp /etc/ais/openais.conf r...@bravo:/etc/ais/openais.conf

Start OpenAIS:

alpha:~ # /etc/init.d/openais start

Configure the default cluster options:

alpha:~ # crm

A two-node cluster should not be concerned with quorum. STONITH is disabled in this configuration, though it is highly recommended in any production environment to eliminate the risk of divergent data. A default resource stickiness of 1000 will keep resources where they are after a fail-over and prevent them from returning to a failed node after it comes back online.
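The exact crm input did not survive in this copy of the article. A sketch of the settings the paragraph above describes, using standard Pacemaker 1.0 cluster properties, might look like:

```
crm(live)# configure
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# property stonith-enabled="false"
crm(live)configure# rsc_defaults resource-stickiness="1000"
crm(live)configure# commit
```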
4. Configure DRBD Master/Slave Resource

alpha:~ # crm configure

At this point, Pacemaker is handling the DRBD resource r0. Check crm_mon to make sure.
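The crm configure input for this step was lost in this copy. A sketch of a master/slave definition consistent with the ms_drbd_xen name shown in the crm_mon output later in this guide (the primitive name drbd_r0 and the monitor intervals are assumptions) might be:

```
# sketch: define the DRBD primitive and wrap it in a master/slave set;
# ms_drbd_xen matches the crm_mon output shown later, drbd_r0 is an assumption
primitive drbd_r0 ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="15s" role="Master" \
    op monitor interval="30s" role="Slave"
ms ms_drbd_xen drbd_r0 \
    meta master-max="1" master-node-max="1" \
         clone-max="2" clone-node-max="1" notify="true"
commit
```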
5. Configure File System Resource

For this setup there will be one file system resource that runs on the DRBD master, and virtual machine resources that run on whichever node the file system is mounted.

Create the file system and mount points:

[r...@alpha ~]# mkfs.ext3 /dev/drbd0

Note: you must run the mkfs command on whichever node is the current Master/Primary.

Copy the existing virtual machine configuration file and disk image to the shared storage:

[r...@alpha ~]# mount /dev/drbd0 /xen

Note: Do not forget to update debian.cfg to point to the new location of the disk image.

Configure the file system resource, constraining it to run with and after DRBD:

[r...@alpha ~]# crm configure
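Again, the original crm input is missing here. A sketch consistent with the device, mount point, and file system used above and with the xen_fs name shown in crm_mon (the constraint names are assumptions) might be:

```
# sketch: xen_fs mounts the DRBD device on /xen; the colocation and order
# constraints tie it to the DRBD master (constraint names are assumptions)
primitive xen_fs ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/xen" fstype="ext3"
colocation fs_on_drbd inf: xen_fs ms_drbd_xen:Master
order fs_after_drbd inf: ms_drbd_xen:promote xen_fs:start
commit
```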
6. Configure domU Resource

domUs will be configured to use virtual disk images that are stored on the DRBD resource mounted at /xen. It is not required, but it is a good idea to also store domU configuration files on the shared resource.

Configure the Xen domU resource and constrain it to run with and after xen_fs:

[r...@alpha ~]# crm configure

The file system and domU resources should now be running on whichever node is the DRBD primary:

Online: [ alpha bravo ]

Master/Slave Set: ms_drbd_xen
    Masters: [ alpha ]
    Slaves: [ bravo ]
xen_fs (ocf::heartbeat:Filesystem): Started alpha
debian (ocf::heartbeat:Xen): Started alpha

The Debian virtual machine as well as its backing storage are now configured for full redundancy and high availability. Should host alpha fail, services will automatically fail over to bravo. This configuration can be expanded to include any number of virtual machines, assuming they adhere to the storage and memory constraints of the environment. To do so, simply repeat "6. Configure domU Resource" for each domU.
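As with the earlier steps, the crm input for the domU resource did not survive in this copy. A sketch consistent with the debian resource shown in the crm_mon output above (timeouts and constraint names are assumptions; xmfile assumes debian.cfg was copied to the shared /xen mount) might be:

```
# sketch: the debian primitive manages the Xen domU; xmfile points at the
# configuration file on the shared mount (constraint names are assumptions)
primitive debian ocf:heartbeat:Xen \
    params xmfile="/xen/debian.cfg" \
    op monitor interval="10s" \
    op start timeout="60s" \
    op stop timeout="60s"
colocation debian_on_fs inf: debian xen_fs
order debian_after_fs inf: xen_fs:start debian:start
commit
```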
7. Additional Information

LINBIT has led the way in high-availability since 2001, and continues to be the market leader in business uptime, disaster recovery, and continuity solutions. Built on a solid base of Austrian software engineering and open-source technology, DRBD is the industry standard for high availability and data redundancy for mission-critical systems. For more information on how LINBIT can evolve your IT infrastructure, call 1-877-4-LINBIT, visit http://www.linbit.com, or join us on irc.freenode.net in #DRBD.