I'm basically doing the same thing you want to, just not with the amount of
data you're talking about.  I have my 2008 R2 VM connected to its file
share data via the iSCSI initiator inside the VM.  That way the volumes can
be snapped directly on my EQ boxes without any concern for the VM
itself, though with shadow copies turned on I've never had to use it.
My PS5000 and PS6000 are both configured for RAID 50.  With VMware HA
in place I don't really see the need for clustered storage, but your
requirements may be different.  I'm not doing MPIO on the file server;
I don't need to for the few hundred meg of data on it.  But I am doing
MPIO for my VM SQL boxes, and soon Exchange.  VMware has an excellent
white paper on best practices for Exchange 2010 running in a VM using
iSCSI and MPIO.
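For anyone wiring this up from scratch, the in-guest iSCSI + MPIO setup on 2008 R2 can be sketched from an elevated command prompt roughly as below. The portal IP and target IQN are placeholders, and the generic Microsoft DSM claim is just one option; the EqualLogic HIT Kit ships its own DSM instead:

```shell
:: Let the Microsoft MPIO DSM claim iSCSI-attached devices
:: (skip this if you install the EqualLogic HIT Kit DSM instead)
mpclaim -n -i -d "MSFT2005iSCSIBusType_0x9"

:: Point the initiator at the group IP and log in to the volume
:: (10.0.0.10 and the IQN are placeholders for your environment)
iscsicli QAddTargetPortal 10.0.0.10
iscsicli QLoginTarget iqn.2001-05.com.equallogic:0-8a0906-example-vol
```

A rough sketch only; in practice you'd make the sessions persistent and add a second NIC/session per path before trusting it for failover.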

-----Original Message-----
From: Ian Roche [mailto:[email protected]] 
Sent: Thursday, October 28, 2010 10:02 AM
To: NT System Admin Issues
Subject: Virtualization of file server - storage recommendations.

Hi Guys,

I am about to undertake a new project which, as the subject line states,
is turning our current file server into a VM. It's currently a clustered
Windows system (2 x NX1950s) connected via SAS to an MD3000 + MD1000.
The quantity of data living on the storage is around 6TB (a lot of
it static), and it's split over 2TB drives. We have a relatively new
VMware environment consisting of 4 hosts (Dell R900s and M710 blades
running ESXi 4); they are connected to iSCSI EqualLogic PS6000 units,
which are all in a RAID 50 config.

That's the hardware background; my query is around the virtualization of
the storage for the virtual file server. I have read a number of articles
but would be interested if anyone has "real world" experience of what
they feel works or definitely doesn't. My plan is to have a small
front-end VM running Windows Server 2008 R2 (just a single VM that will
be protected by VMware HA) sitting in front of the storage. My question
is how you think the storage is best presented. I'm leaning towards
standard VMFS datastores on dedicated 2TB LUNs accessed purely by this
new VM, or using the EqualLogic MPIO and the Windows iSCSI initiator to
present the storage that way (which we have done on a couple of SQL
boxes already). From what I have read so far I don't see any particular
reason to go down the RDM route, other than some people saying that if
you have large drives the RDM path can be a good idea. I don't see a
huge performance difference between VMFS and RDM in any of the
documentation I have read to date.


I would appreciate any comments or suggestions on this one; it's at an
early stage but I just want to get the map down for the migration.
Whatever route I choose, the swap-over will be done inline: I will
migrate the data to the new VM (using robocopy), kill access to the
physical box, do a final diff, and rename the virtual system to the
correct name (the FQDN is very important in my company; it's referenced
in a lot of build scripts, as we are a software house).
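As a sketch of that cutover, the seed-then-diff pattern with robocopy might look like the following (server, share, drive, and log paths are placeholders):

```shell
:: Seed pass while the old cluster is still live; safe to re-run
robocopy \\OLDFS\data E:\data /MIR /COPYALL /ZB /R:1 /W:1 /MT:16 /LOG:C:\logs\seed.log /TEE

:: After killing access to the physical box, a final diff pass
:: picks up only what changed since the seed
robocopy \\OLDFS\data E:\data /MIR /COPYALL /ZB /R:1 /W:1 /LOG:C:\logs\final.log /TEE
```

Worth noting that /MIR deletes anything at the destination that no longer exists at the source, so point it at a dedicated target directory.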

Cheers
Ian
~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ <http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/>  ~

---
To manage subscriptions click here:
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to [email protected]
with the body: unsubscribe ntsysadmin
