That's why I described my use case - to make the MTBF figures meaningful.
As I said, I have my system configured so that most heavy write accesses
go to rotating media. I typically try to have my system partitions
mounted read only, except var and tmp. I am currently using 32GB uSD
devices for Raspberry Pi based servers, with about 100GB per year in
writes (as reported by iostat), plus perhaps an initial 100GB written
during installation and configuration. The last failure I had was about
3 months ago, on a uSD card that had been in use for just under 2 years.
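For illustration, a setup along those lines might look something like the
following /etc/fstab fragment (a sketch only - the device names and the
split between uSD and rotating media here are hypothetical, not copied
from my actual configuration):

```
# root on uSD, mounted read only; heavy writers on rotating media
/dev/mmcblk0p2  /     ext4   ro,noatime        0  1
/dev/sda1       /var  ext4   defaults,noatime  0  2
tmpfs           /tmp  tmpfs  defaults,noatime  0  0
```

With /var on the spinning disk and /tmp in RAM, the uSD card sees almost
no writes after installation.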

I can only speak from my own experience, but I don't think my usage is
sufficiently atypical not to count as 'in practice' for use as computer
storage (ie I am not talking about music players, cameras, phones etc).

The symptom tends to be thousands of widely dispersed bad sectors appearing
almost simultaneously, in this case on a journaling filesystem, so
previously valid information goes bad without any access being made to it.
Perhaps this is wear leveling going wrong when it is trying to move less
frequently used data to parts of the flash which are very worn. Whatever it
is, it seems subjectively to be a much more rapid decline than rotating
media when it starts going wrong, and whereas rotating media usually
returns sporadic read errors when failing, I have found SSDs often
silently return the wrong data when they go bad - which I find particularly
worrying (you wouldn't want to be using a SSD in a RAID, for example).
Consequently I have started using filesystems which checksum data as well
as metadata when using flash based storage.
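As a concrete example of that approach (a sketch only - the device name
is hypothetical, and btrfs is just one such filesystem, ZFS being
another):

```shell
# btrfs checksums data as well as metadata (CRC32C by default)
mkfs.btrfs /dev/sdb1
mount /dev/sdb1 /mnt/flash

# a periodic scrub re-reads everything and verifies the checksums,
# turning silently returned wrong data into reported errors
btrfs scrub start /mnt/flash
btrfs scrub status /mnt/flash
```

That doesn't stop the flash going bad, but at least a bad read is
reported rather than handed back as if it were valid.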

My usage of 2.5" SATA SSDs has not really been over a long enough period
to get a good feel for how it compares with removable flash media - I would
hope it is more robust. But when used as the only storage in a laptop
environment, I would expect much higher levels of write access than a
specially configured server. I don't pretend to know what 'typical things
people do' would be, but it isn't hard to imagine scenarios that could
result in several GB a day for a non-technical user - downloading movies
to a laptop to watch while commuting for example... I certainly wouldn't
feel comfortable regularly rebuilding the Linux kernel on an SSD based
system.

So from my experience, I would still tend to go along with Erik's advice
(as relayed by Steve), or perhaps be even more fastidious about backups
when using flash...

On 3 February 2018 at 20:10, Bakul Shah <> wrote:

> On Sat, 03 Feb 2018 18:49:50 +0000 "Digby R.S. Tarvin" <>
> wrote:
> Digby R.S. Tarvin writes:
> >
> > My experience of running normal (read mostly) Linux filesystems on solid
> > state media is that SSD is more robust but far less reliable than
> rotating
> > media.
> >
> > MTBF for rotating media for me has been around 10 years. MTBF for SSD has
> > been about 2. And the SSD always seems to fail catastrophically -
> appearing
> > to work fine one day, then after an inexplicable crash, half the media is
> > unreadable. I assume this is something to do with the wear leveling,
> which
> > successfully gets most of the blocks to wear out at the same time with no
> > warning. If I reformat and reload the SSD to correct all the mysterious
> > corruptions, it last a few weeks, then does the same thing again.
> MTTF doesn't make much sense for SSDs. A 1TB SSD I bought a
> couple years ago has a rating of 300 TB written, an MTBF of 2M
> hours and a 10 year warranty. It can do over 500MB/s of
> sequential writes. If I average 9.5MB/s writes, it will last a
> year. If I continuously write 100MB/s, it will last under 35
> days. In contrast the life of an HDD depends on how long it
> has been spinning, seeks, temperature and load/unloads. A disk
> with 5 year warranty will likely last 5 years even if you write
> 100MB/s continuously.
> And consumer SDHC cards such as the ones used in cameras and
> Raspi are much much worse.
> In practice an SSD will last much longer than a HDD since
> average write rates are not high for the typical things people
> do.
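
The endurance arithmetic in the quote above can be checked in a few
lines (a quick sketch - decimal units assumed, using the 300 TB written
rating Bakul quotes):

```python
# Lifetime of an SSD rated for 300 TB written, at a constant write rate.
# Decimal units assumed throughout: 1 TB = 1e12 bytes, 1 MB = 1e6 bytes.

RATED_BYTES = 300e12  # 300 TB written endurance rating

def lifetime_days(write_rate_mb_per_s):
    """Days until the rated endurance is exhausted at this write rate."""
    seconds = RATED_BYTES / (write_rate_mb_per_s * 1e6)
    return seconds / 86400

print(round(lifetime_days(9.5)))  # ~365 days, i.e. about a year
print(round(lifetime_days(100)))  # ~35 days
```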
