With RDM, versus the method Kent described, it's a bit more complicated and
it will prevent snapshots and VMotion.


Basically, follow what he said, but instead of making a VMDK disk, choose RDM
and select a LUN.


Then make sure that machine is NOT powered on, log into the ESX host, and
move the RDM file to, say, /vmfs/volumes/volume_name/RawDeviceMaps (you need
to create that folder).
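
From the ESX service console that looks roughly like this (the volume, VM,
and file names are just placeholders for yours; the RDM has both a small
descriptor .vmdk and an -rdm/-rdmp mapping file, and both should move
together):

    mkdir /vmfs/volumes/volume_name/RawDeviceMaps
    mv /vmfs/volumes/volume_name/vm_name/vm_name_1.vmdk \
       /vmfs/volumes/volume_name/vm_name/vm_name_1-rdmp.vmdk \
       /vmfs/volumes/volume_name/RawDeviceMaps/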

Next, manually edit the VMX for that VM and change its path to the RDM to
wherever you moved it.
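
In practice that means the fileName line for whichever virtual SCSI device
the RDM is attached to, something along these lines (device numbers and names
here are only examples):

    scsi1:0.present = "true"
    scsi1:0.fileName = "/vmfs/volumes/volume_name/RawDeviceMaps/vm_name_1.vmdk"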


Now you can create new clones of your base template and add the RDM drive
to each one (as Kent mentioned, putting it on a separate SCSI controller is
VERY important), pointing to the RawDeviceMaps folder and the correct RDM
file for that LUN.



This approach has many issues, so I'm planning on moving away from it:



1)      You can't clone

2)      You can't snapshot

3)      You can't VMotion

4)      If you delete a host that has that drive attached, you completely
destroy the RDM file (BAD JUJU)

If you do need to have a cluster in such an environment, I would suggest a
combination of the two approaches.




1)      Build a new LUN, make it VMFS, and let the ESX hosts discover it

2)      Create the VMDKs on that LUN, not in your main VMFS for VMs

3)      Make sure you set any OCFS drive to a separate SCSI controller, with
bus sharing set to physical and the disk mode set to independent persistent
(so snapshots won't touch it); see the rough VMX excerpt after this list
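
A rough idea of how those settings end up in a clone's VMX (the controller
number, volume name, and file name below are just examples; I would still set
them through the VI client rather than by hand):

    scsi1.present = "true"
    scsi1.virtualDev = "lsilogic"
    scsi1.sharedBus = "physical"
    scsi1:0.present = "true"
    scsi1:0.fileName = "/vmfs/volumes/ocfs_vmfs_lun/ocfs_shared1.vmdk"
    scsi1:0.mode = "independent-persistent"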


You should retain snapshots/VMotion, but be aware: I am not sure whether
cloning will make a new VMDK on the VMFS volume you create for the OCFS
drives. So I would have a base template that I clone, then add that drive to
the clone (to guarantee the drive's location).



It's a bit more work than just saving the VMDK to the VM's folder on your
main VMFS, but it separates the OCFS drives onto another LUN. That way you
could easily stop your cluster, take a snapshot of the LUN for backups, and
bring the nodes back up, limiting your downtime window. It might be overkill,
depending on the company's backup stance.
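
On the guest side that window is roughly the following (assuming the standard
o2cb/ocfs2 init scripts that ship with ocfs2-tools on SLES; the LUN snapshot
itself is whatever command your SAN vendor provides):

    # on each node, stop the cluster cleanly
    /etc/init.d/ocfs2 stop   # unmounts the OCFS2 filesystems
    /etc/init.d/o2cb stop    # takes the cluster stack offline
    # take the LUN snapshot on the array (array-specific), then bring it back
    /etc/init.d/o2cb start
    /etc/init.d/ocfs2 start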



Hope it helps


From: ocfs2-users-boun...@oss.oracle.com
[mailto:ocfs2-users-boun...@oss.oracle.com] On Behalf Of Rankin, Kent
Sent: Monday, July 28, 2008 9:13 PM
To: Haydn Cahir; ocfs2-users@oss.oracle.com
Subject: Re: [Ocfs2-users] OCFS2 and VMware ESX


What I did a few days ago was to create a vmware disk for each OCFS2
filesystem, and store it with one of the VM nodes.  Then, add that disk to
each additional VM.  When you add it, use a separate SCSI host number.  In
other words, if the OS is on SCSI 0:0, make the disk SCSI 1:0, or some
arbitrary other HBA number.  Then you can go to each host's second VM SCSI
device and modify it to be shared, and of type Physical (if I remember
correctly).  At that point, it works fine.

Kent Rankin

-----Original Message-----
From: ocfs2-users-boun...@oss.oracle.com on behalf of Haydn Cahir
Sent: Mon 7/28/2008 9:48 PM
To: ocfs2-users@oss.oracle.com
Subject: [Ocfs2-users] OCFS2 and VMware ESX


We are having some serious issues trying to configure an OCFS2 cluster on 3
SLES 10 SP2 boxes running in VMware ESX 3.0.1. Before I go into any of the
detailed errors we are experiencing I first wanted to ask everyone if they
have successfully configured this solution? We would be interested to find
out what needs to be set at the VMware level (RDM, VMFS, NICS etc) and what
needs to be configured at the O/S level. We have a LUN on our SAN that we
have presented to our VMware hosts that we are using for this.

Any help would be greatly appreciated!
