My experience of running normal (read-mostly) Linux filesystems on solid
state media is that SSD is more robust but far less reliable than rotating
media.

MTBF for rotating media for me has been around 10 years. MTBF for SSD has
been about 2. And the SSD always seems to fail catastrophically - appearing
to work fine one day, then, after an inexplicable crash, half the media is
unreadable. I assume this is something to do with the wear leveling, which
successfully gets most of the blocks to wear out at the same time, with no
warning. If I reformat and reload the SSD to correct all the mysterious
corruptions, it lasts a few weeks, then does the same thing again.
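
Something like this toy model is what I have in mind - the numbers are made
up and purely illustrative, but it shows why "good" wear leveling would
compress the block deaths into a narrow window, while uneven wear kills a
few hot blocks early and so gives some warning:

    # Toy model of why wear leveling can turn gradual wear into a cliff:
    # it equalises per-block wear, so block deaths cluster tightly around
    # the mean endurance. All numbers are invented.
    import random

    BLOCKS = 1000
    # Per-block endurance in erase cycles, with some manufacturing spread:
    endurance = [random.gauss(10_000, 500) for _ in range(BLOCKS)]

    def failure_window(rates):
        """Return (time of first block death, time at which half are dead)
        for the given per-block wear rates (erases per unit time)."""
        deaths = sorted(e / r for e, r in zip(endurance, rates))
        return deaths[0], deaths[BLOCKS // 2]

    leveled = [1.0] * BLOCKS                         # ideal wear leveling
    unleveled = [random.expovariate(1.0) + 0.01      # hot and cold blocks
                 for _ in range(BLOCKS)]

    for name, rates in (("leveled", leveled), ("unleveled", unleveled)):
        first, half = failure_window(rates)
        print(f"{name:9s} first death t={first:7.0f}  half dead t={half:7.0f}")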

I have had servers running off solid state media continuously since about
2003: PATA-to-CF adapters initially, currently uSD in Raspberry Pis etc.,
and 2.5" SATA SSD drives. I used to use mostly SCSI rotating media, so
my reliability may have been better than cheaper PC drives. I had quite a
few (probably 90%) of the 1GB Wren 7 drives retired after 10-15 years of
running 24/7 under very heavy load (in an on-air broadcast system) with no
signs of trouble. The 2.5" SATA form factor SSDs seem to last better -
perhaps indicating that the advertised capacity is a smaller proportion of
the overall capacity available to level the wear over.

I don't have a large number of servers, so not really a big enough sample
to draw definite conclusions from, but enough to make me wary of relying
too much on SSDs as a panacea.

My current server configuration is a uSD system drive, with a much larger
rotating disk that spins down when not in use (it generally only gets used
when a user is logged in), and an up-to-date backup of everything on the
uSD is kept on the rotating media. I am not keen on having SSD as a swap
device, unless you have multiple SSDs, in which case you just treat the
swap/tmp media as disposable. If I am short of RAM (like on a Raspberry
Pi), I would prefer to have an external RAM drive for swapping.
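
The backup side of that needs nothing clever - a minimal sketch of one way
to do it, run nightly from cron, would be something like the following
(paths are examples only, and it assumes rsync is installed; the rotating
disk spins up for the copy and spins back down afterwards):

    #!/usr/bin/env python3
    # Nightly uSD -> rotating-disk backup sketch. Example paths only.
    import subprocess

    SRC = "/"                 # the uSD root filesystem
    DST = "/mnt/backup/root"  # directory on the rotating disk

    subprocess.run(
        ["rsync", "-aHx", "--delete",  # -x: don't cross filesystem boundaries
         "--exclude=/proc", "--exclude=/sys",
         "--exclude=/dev", "--exclude=/tmp",
         SRC, DST],
        check=True,
    )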

I have had the rotating media fail once in this configuration - quite
recently. 1TB 5.25", so quite a few years old. It went through a couple of
months of taking too long to spin up when accessed after a spin-down,
requiring a manual unmount to get the system to recognize it again. Then
eventually it wouldn't spin up at all. The interesting thing (for me) was
that the SMART data from the drive gave it an all-clear right to the end.
But unlike the SSDs, there was plenty of behavioural warning to remind me
to have the backups up to date and a spare at the ready...
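
For anyone wanting to watch for the same thing: smartctl from the
smartmontools package is the usual way to poll the drive's own verdict,
though as the above shows, a PASSED result is not worth much on its own.
A minimal poll might look like this (the device node is just an example):

    #!/usr/bin/env python3
    # Minimal SMART health poll via smartctl (needs root; device name is
    # just an example). As noted above, PASSED is no guarantee.
    import subprocess

    DEVICE = "/dev/sda"

    out = subprocess.run(["smartctl", "-H", DEVICE],
                         capture_output=True, text=True).stdout

    # ATA drives report a line like:
    #   SMART overall-health self-assessment test result: PASSED
    for line in out.splitlines():
        if "self-assessment" in line:
            print(line.strip())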

So bottom line, in my experience, SSDs are great for read access time, low
power, low noise, and robustness. But they are not good for reliability,
capacity, or usage which is not read-mostly. (And RAID usage is no
substitute for backups - but that is another story.)

DigbyT.

On 3 February 2018 at 16:53, hiro <23h...@gmail.com> wrote:

> not so sure about mtbf. but it's too early to tell.
>
>
