There are a lot of details you need to fill in. A 50-user production server doing what? 
File share with large files or lots of small ones? SQL Server with OLAP or OLTP loads? 
Then there's the technology of the SSD drives themselves: not just the MLC/SLC 
distinction, but drives with controllers smart enough to handle RAID configurations. 
If you look closely at disk I/O on SSDs, most of the high-end drives top out around 
the 240GB level; larger drives can start to decrease in performance. Granted, they're 
still faster than traditional drives, but is the $/GB worth it to you? You didn't 
mention how much space you need. How high-end is your controller, and does its HCL 
list specific SSDs? I'm guessing SSDs may be overkill for what you're talking about. 
Will the users really see improvements in their interactions with the server? They 
won't care whether the server takes 20 seconds to boot or 2 minutes.
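
Just to put numbers on the $/GB point, here's a quick back-of-the-envelope sketch in 
Python. All the prices and capacities below are made-up placeholders for illustration, 
not real quotes:

    # Hypothetical $/GB comparison; prices are placeholders, not quotes.
    drives = {
        "SLC SSD 240GB": {"price_usd": 500.0, "capacity_gb": 240},
        "MLC SSD 480GB": {"price_usd": 400.0, "capacity_gb": 480},
        "15k SAS 600GB": {"price_usd": 250.0, "capacity_gb": 600},
        "7.2k SATA 2TB": {"price_usd": 150.0, "capacity_gb": 2000},
    }
    for name, d in drives.items():
        print(f"{name}: ${d['price_usd'] / d['capacity_gb']:.2f}/GB")

Run that with whatever real quotes you get and the capacity you actually need, and the 
premium you're paying for SSD becomes obvious one way or the other.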

That being said, I love to jump on the newest tech out there and see what it can do. 
But I've learned my lesson: never put something in production I don't fully 
understand. I've been messing with SSDs for years, mainly using them in equipment in 
hostile environments. The early cheap MLC drives I used tended to fail quite often. 
I've had hardly any issues with the newer high-end SLC drives, other than older SATA 
controllers not liking them. I boot my ESX hosts from them at home for power savings 
(the guests are on my QNAP). I have a few laptops and desktops with them and they all 
work great. I've also tested putting a single high-end OCZ SSD in my QNAP and running 
a virtual guest off it. In my case, performance tests showed about a 25% improvement 
in guest disk I/O versus the EqualLogic array it was on before. But even that is 
comparing apples to oranges.
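
If you want to reproduce that kind of before/after disk I/O comparison yourself, a 
minimal random-read micro-benchmark sketch in Python is below. The file path and sizes 
are assumptions, and it doesn't bypass the OS page cache, so treat the numbers as 
rough and compare trends between volumes, not absolutes:

    # Minimal random-read micro-benchmark sketch; TEST_FILE is a hypothetical
    # path on the volume under test. Does not bypass the OS page cache.
    import os, random, time

    TEST_FILE = r"D:\bench\testfile.bin"
    FILE_SIZE = 256 * 1024 * 1024   # 256 MB test file
    BLOCK = 4096                    # 4 KB reads, a typical OLTP-ish block size
    READS = 5000

    # Create the test file once if it doesn't exist.
    os.makedirs(os.path.dirname(TEST_FILE), exist_ok=True)
    if not os.path.exists(TEST_FILE):
        with open(TEST_FILE, "wb") as f:
            f.write(os.urandom(FILE_SIZE))

    start = time.time()
    with open(TEST_FILE, "rb") as f:
        for _ in range(READS):
            f.seek(random.randrange(FILE_SIZE - BLOCK))
            f.read(BLOCK)
    elapsed = time.time() - start
    print(f"{READS} random {BLOCK}-byte reads in {elapsed:.2f}s "
          f"({READS / elapsed:.0f} reads/s, cache effects included)")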

________________________________
From: David Lum [mailto:[email protected]]
Sent: Tuesday, January 08, 2013 9:17 AM
To: NT System Admin Issues
Subject: SSD and 2008 R2 Hyper-V, SAS vs. SATA SSD

Some of you may remember I fought a little with putting an SSD drive in my old home 
lab PowerEdge 840, but I did finally get it to work. I've been running 2008 R2 
Hyper-V Server on SSD for about a week now, and all I can say is: holy crap! The boot 
times (compared to the previous platter SATA drives) are insane. I had no idea a 
server OS could boot so fast! I haven't timed it, but I'd guess it's less than 10 
seconds from the end of POST to me being able to RDP to it.

My question is....for a 50-user production server, which would be faster for the OS, 
SAS or SATA SSD? Something I find little discussion on is the controller architecture 
(SATA SSDs vs. SAS disks) and performance with varying levels of concurrent client 
connections. SATA drives now have NCQ; does this negate/mitigate the traditional SCSI 
advantage?
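
One rough way to probe whether deeper queues (NCQ) actually help under concurrency is 
to watch throughput as parallel readers are added. A minimal Python sketch follows; 
the test-file path is an assumption, the thread counts are a crude stand-in for 
client connections, and OS caching will skew the absolute numbers:

    # Concurrency scaling sketch: does read throughput grow with parallel readers?
    # TEST_FILE is a hypothetical path; reuse the file from the earlier sketch.
    import os, random, time
    from concurrent.futures import ThreadPoolExecutor

    TEST_FILE = r"D:\bench\testfile.bin"
    BLOCK = 4096
    READS_PER_WORKER = 2000

    def reader(_):
        # Each worker opens its own handle and issues random 4 KB reads.
        size = os.path.getsize(TEST_FILE)
        with open(TEST_FILE, "rb") as f:
            for _ in range(READS_PER_WORKER):
                f.seek(random.randrange(size - BLOCK))
                f.read(BLOCK)

    for workers in (1, 4, 16):  # crude stand-in for concurrent client connections
        start = time.time()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(reader, range(workers)))
        elapsed = time.time() - start
        print(f"{workers:2d} workers: "
              f"{workers * READS_PER_WORKER / elapsed:.0f} reads/s")

If reads/s keeps climbing as workers are added, the drive and controller are handling 
the deeper queue well; if it flattens at 1 worker's rate, the queue isn't helping.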
David Lum
Sr. Systems Engineer // NWEATM
Office 503.548.5229 // Cell (voice/text) 503.267.9764


