1) Vast market inertia: the installed base of 2.5" SAS hot-swap trays, backplanes, and servers built around them.
2) They're recognized by any recent Linux kernel; Windows may need drivers. In some cases they are non-bootable, so your boot partition must live on another type of device.

3) If you're at such a high end that you need these devices, you probably have a cluster of >20 identical servers, and doing RAID may be pointless. See:

http://www.reddit.com/r/sysadmin/comments/2pysn8/cloudflare_agrees_disk_performance_is_improved/
http://blog.ioflood.com/2014/12/20/cloudflare-agrees-disk-performance-is-improved-when-you-get-rid-of-hardware-raid/

You can achieve greater redundancy by having a pair of servers mirror each other than with RAID-1 or RAID-10 on a single box.

On Mon, Dec 22, 2014 at 4:57 PM, Ken Hohhof via Af <[email protected]> wrote:

> So I've been impressed lately with the performance improvements to
> personal computers and I/O-intensive servers like web and mail servers
> from replacing HDDs with SSDs. I'm convinced the emphasis on CPU and
> memory is often misplaced and the key is disk read/write performance. I
> think part of this is that our use of computers has gone from computing
> oriented to data oriented. Big, big data. The one exception perhaps
> being games, but is that CPU intensive or GPU intensive?
>
> So I've noticed there are enterprise SSD cards that go in a PCI-E slot,
> like the Intel S3700, Huawei ES3000, and Samsung SM1715. The performance
> numbers sound comparable to a very expensive RAID array of SAS drives.
> It does raise the question: why are we making SSDs look like HDDs,
> including form factor and electrical interface, other than for the
> hot-swap capability of SATA/SAS?
>
> Has anyone used these things? Are they automatically recognized by
> Windows and Linux as disk drives? Do you need to load special drivers
> and jump through special hoops? Is there any point trying to do RAID
> with these, and can that even be done?
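To the recognition question: once the kernel loads the driver, a PCIe SSD is just another block device, so the usual tools work on it. A minimal sketch (the device names like /dev/nvme0n1 are illustrative, not from any specific card in this thread):

```shell
# Any PCIe SSD the kernel recognizes appears as an ordinary block device,
# listed alongside SATA/SAS disks:
cat /proc/partitions

# Illustrative names only: an NVMe card typically shows up as
# /dev/nvme0n1; cards using a vendor driver may have their own naming.
# If you do want local RAID anyway, Linux md treats them like any other
# disk, e.g.:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 \
#         /dev/nvme0n1 /dev/nvme1n1
```

Booting from them is the part that depends on firmware support, which is why some of these cards are data-volume-only.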
