Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-25 Thread Mikael
For storage that is *guaranteed* to be intact, what is the way then?

This is with the background of recent discussions that touched on
https://www.usenix.org/legacy/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/index.html
and https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/ .



What about having *two SSDs* in softraid RAID1, and as soon as any I/O
failure is found on either SSD, that one gets replaced?

If the underlying read operations are made from both SSDs each time and
the machine has ECC RAM (?? and UFS is checksummed enough ??), then at
least the OS would be able to detect corruption (?? and fix anything ??)
and return proper read failures (or SIGSEGV).
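
Concretely, I mean something like this (a rough sketch only, assuming the
two SSDs attach as sd0 and sd1; device names and partition letters are my
assumptions):

    # create a RAID-type 'a' partition on each disk, then assemble RAID1
    fdisk -iy sd0
    fdisk -iy sd1
    disklabel -E sd0        # add an 'a' partition with FS type "RAID"
    disklabel -E sd1        # likewise
    bioctl -c 1 -l sd0a,sd1a softraid0

    # the volume attaches as a new sd device (say sd2); watch its status
    bioctl sd2

    # if one SSD starts throwing I/O errors, swap it and rebuild onto it
    bioctl -R /dev/sd1a sd2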

Mikael

2015-06-18 16:23 GMT+07:00 Karel Gardas gard...@gmail.com:

 On Thu, Jun 18, 2015 at 9:08 AM, David Dahlberg
 david.dahlb...@fkie.fraunhofer.de wrote:
  On Thursday, 18.06.2015, at 02:15 +0530, Mikael wrote:
 
  2015-06-18 2:07 GMT+05:30 Gareth Nelson gar...@garethnelson.com:
  No I meant, you plug in a 2TB SSD and a 2TB magnetic HD, is there any
  way to make them properly mirror each other [so the SSD performance is
  delivered while the magnetic disk safeguards the contents] - would you
  use softraid here?
 
  No. If you use a RAID1, you'll get the performance of the worse of the
  two disks. To support multiple disks with different characteristics and
  to get the most out of them was AFAIK one of the motivations for Matthew
  Dillon to write HAMMER.
 

 I'm not sure about RAID1 in general, but I've been reading the softraid
 code recently, and based on it I would claim that you get the write
 performance of the slowest drive (assuming OpenBSD schedules writes to
 the different drives in parallel), but read performance slightly higher
 than the slower drive, since reads are done in round-robin fashion, hence
 the SSD will speed things up a little bit.

 Anyway, the interesting question is whether it makes sense to balance
 this interleaved reading based on actual drive performance. AFAIK this
 should be possible, but IMHO it'll not help that much, i.e. it'll not
 provide much added reliability. Since reliability is my concern, I'm more
 looking forward to seeing some kind of virtual drive with block
 checksumming implemented in OpenBSD; IMHO that would provide some added
 reliability when run, for example, in a RAID1 setup.

 Karel



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-25 Thread Karel Gardas
On Thu, Jun 25, 2015 at 12:57 PM, Mikael mikael.tr...@gmail.com wrote:
 For storage that is *guaranteed* to be intact, what is the way then?

 This is with the background of recent discussions that touched on
 https://www.usenix.org/legacy/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/index.html
 and https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/ .



 What about having *two SSDs* in softraid RAID1, and as soon as any I/O
 failure is found on either SSD, that one gets replaced?

 If the underlying read operations are made from both SSDs each time and
 the machine has ECC RAM (?? and UFS is checksummed enough ??), then at
 least the OS would be able to detect corruption (?? and fix anything ??)
 and return proper read failures (or SIGSEGV).

I'm afraid that as long as the SSD is not signalling any issue, you may
end up with corrupted data in RAM, and even softraid RAID1 will not help
you. AFAIK FFS does not provide any checksumming support for user data,
so this is the same issue again. I've been tinkering with an idea to
enhance softraid RAID1 with checksumming support, and am currently
reading papers and code to grasp some knowledge of the topic. The thread
is here: https://www.marc.info/?l=openbsd-tech&m=143447306012773&w=1 --
if you are quicker than me at implementing it, then great! I'll probably
switch to some other task in the OpenBSD domain. :-)
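
Until something like that exists, a crude offline check is at least
possible by hashing the data areas of the two mirror halves (a sketch
only: it assumes the volume is offline, that the chunks are sd0a/sd1a,
and that data starts at the usual 528-sector softraid metadata offset --
verify that against sys/dev/softraidvar.h for your release):

    #!/bin/sh
    # hash the data area of each RAID1 chunk, skipping softraid metadata;
    # differing hashes mean the halves hold silently diverged data
    off=528
    dd if=/dev/rsd0a bs=512 skip=$off 2>/dev/null | sha256
    dd if=/dev/rsd1a bs=512 skip=$off 2>/dev/null | sha256

This only tells you *that* the halves differ, not which one is right --
which is exactly why per-block checksums in the metadata would help.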

Cheers,
Karel



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-20 Thread frantisek holop
Chris Cappuccio, 19 Jun 2015 09:59:
 The problem identified in this article is _NOT_ TRIM support. It's
 QUEUED TRIM support. It's an exotic firmware feature that is BROKEN.
 Suffice to say, if Windows doesn't exercise an exotic feature in PC
 hardware, it may not be well tested by anybody!

the author has clarified in the comments below the
article that TRIM was the issue, not QUEUED TRIM.

-f
-- 
you have 2 choices for dinner -- take it or leave it.



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-19 Thread Chris Cappuccio
Mikael [mikael.tr...@gmail.com] wrote:
 2015-06-18 2:07 GMT+05:30 Gareth Nelson gar...@garethnelson.com:
 
  On point 3, hybrid SSD drives usually just present a standard IDE
  interface - just use a SATA controller and you don't need to worry about it
 
 
 No I meant, you plug in a 2TB SSD and a 2TB magnetic HD, is there any way
 to make them properly mirror each other [so the SSD performance is
 delivered while the magnetic disk safeguards the contents] - would you
 use softraid here?

You would do nightly backups. RAID 1 would limit your write performance to
that of the HDD.
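
A minimal sketch of the nightly backup (paths, dump level, and schedule
are all assumptions):

    # root crontab: level-0 dump(8) of the SSD filesystem to the HDD at 3am
    0 3 * * * /sbin/dump -0au -f /hdd/home.dump /home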



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-19 Thread Chris Cappuccio
Karel Gardas [gard...@gmail.com] wrote:
 Honestly, with ~20% over-provisioning, once your SSD starts to shrink,
 it's already only good for the dustbin.
 

The recent SSD endurance reviews on the review sites seem to show
that it takes a long, long, long time before the modern SSD indicates 
that it has to remap blocks due to errors, except for the Samsung
TLC drives, which operate for a long time in this state. Most drives
appear to indicate few to no remaps until they are close to the
end of their useful life.

 Another question is this buggy TRIM, but I'm afraid this may be a hard
 fight even with replicating and checksumming filesystems
 (ZFS/HAMMER/BTRFS).
 

The problem identified in this article is _NOT_ TRIM support. It's
QUEUED TRIM support. It's an exotic firmware feature that is BROKEN.
Suffice to say, if Windows doesn't exercise an exotic feature in PC
hardware, it may not be well tested by anybody!

Queued TRIM support is overkill. Regular TRIM support could be
achieved by just telling the drive which blocks are to be zeroed
during idle times with a stand-alone utility. That is the most reliable
way to use TRIM on all drives with the current state of the art.
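
(For what it's worth, that batch approach is roughly what Linux ships as
fstrim(8): a stand-alone utility, typically run from cron during idle
hours, that discards all currently-unused blocks in one pass:

    # discard unused blocks on the root filesystem, verbosely
    fstrim -v /

No TRIM commands ever enter the regular I/O path that way.)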



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-19 Thread andrew fabbro
On Wed, Jun 17, 2015 at 8:27 PM, Nick Holland
n...@holland-consulting.net wrote:
 been meaningless for some time).  When the disk runs out of places to
 write the good data, it throws a permanent write error back to the OS
 and you have a really bad day.  The only difference in this with SSDs is
 the amount of storage dedicated to this (be scared?).

I'm guessing that spare space management is typically handled
entirely within the drive and is not exposed as an API, right?

In other words, you can't say to the drive: "you say you're out of
spare space, but let's take this space here that I'm not using and use
it as new spare space so I can keep using this drive with a reduced
capacity."



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-19 Thread Nick Holland

On 06/19/15 13:38, andrew fabbro wrote:

 On Wed, Jun 17, 2015 at 8:27 PM, Nick Holland
 n...@holland-consulting.net wrote:

  been meaningless for some time).  When the disk runs out of places to
  write the good data, it throws a permanent write error back to the OS
  and you have a really bad day.  The only difference in this with SSDs is
  the amount of storage dedicated to this (be scared?).

 I'm guessing that spare space management is typically handled
 entirely within the drive and is not exposed as an API, right?

right.  Just like a magnetic disk.

 In other words, you can't say to the drive: "you say you're out of spare
 space, but let's take this space here that I'm not using and use it as
 new spare space so I can keep using this drive with a reduced capacity."

right.  Just like a magnetic disk.


Really.  Not much new here, just faster.

Seems the more people try to do special things for SSDs, the more they 
get into trouble.  Stop.  Just treat the SSD as a really fast disk, and 
you will be happy.


SSDs -- overall -- will probably last for the first life (about three
years) of a computer, like rotating rust (i.e., some failures, but
nothing too surprising).  For recycled hw, well, let's see how it works
out, but I very often replace disks anyway on recycled computers -- that
once-huge 120G disk is not impressive any more, and the old disks go into
things that don't matter much.  With the current crop of sub-$100US
2T disks, I wonder how long they will last, too.


Nick.



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-18 Thread Christian Weisgerber
On 2015-06-18, Nick Holland n...@holland-consulting.net wrote:

 The SSD has some number of spare storage blocks.  When it finds a bad
 block, it locks out the bad block and swaps in a good block.

 Curiously -- this is EXACTLY how modern spinning rust hard disks have
 worked for about ... 20 years

Easily 25, for SCSI disks.

 Now, in both cases, this is assuming the drive fails in the way you
 expect -- that the flaw will be spotted on immediate read-after-write,
 while the data is still in the disk's cache or buffer.  There is more
 than one way magnetic disks fail, there's more than one way SSDs fail.
 People tend to hyperventilate over the one way and forget all the rest.

They also tend to forget that magnetic disks also corrupt data, or
never write it, or write it to the wrong place on disk.  Time to
remind people of this great paper:

An Analysis of Data Corruption in the Storage Stack
https://www.usenix.org/legacy/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/index.html

If nothing else, read section 2.3, "Corruption Classes".  It should
scare the bejesus out of you.

-- 
Christian naddy Weisgerber  na...@mips.inka.de



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-18 Thread Karel Gardas
On Thu, Jun 18, 2015 at 1:53 PM, Christian Weisgerber
na...@mips.inka.de wrote:
 They also tend to forget that magnetic disks also corrupt data, or
 never write it, or write it to the wrong place on disk.  Time to
 remind people of this great paper:

 An Analysis of Data Corruption in the Storage Stack
 https://www.usenix.org/legacy/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/index.html

 If nothing else, read section 2.3 Corruption Classes.  It should
 scare the bejesus out of you.

Nice text! I especially like section 6.2, "Lessons Learned". Thanks for sharing!

Karel



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-18 Thread David Dahlberg
On Thursday, 18.06.2015, at 02:15 +0530, Mikael wrote:

 2015-06-18 2:07 GMT+05:30 Gareth Nelson gar...@garethnelson.com:
 No I meant, you plug in a 2TB SSD and a 2TB magnetic HD, is there any way
 to make them properly mirror each other [so the SSD performance is
 delivered while the magnetic disk safeguards the contents] - would you
 use softraid here?

No. If you use a RAID1, you'll get the performance of the worse of the
two disks. To support multiple disks with different characteristics and
to get the most out of them was AFAIK one of the motivations for Matthew
Dillon to write HAMMER.


-- 
David Dahlberg 

Fraunhofer FKIE, Dept. Communication Systems (KOM) | Tel: +49-228-9435-845
Fraunhoferstr. 20, 53343 Wachtberg, Germany| Fax: +49-228-856277



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-18 Thread Karel Gardas
On Thu, Jun 18, 2015 at 9:08 AM, David Dahlberg
david.dahlb...@fkie.fraunhofer.de wrote:
 On Thursday, 18.06.2015, at 02:15 +0530, Mikael wrote:

 2015-06-18 2:07 GMT+05:30 Gareth Nelson gar...@garethnelson.com:
 No I meant, you plug in a 2TB SSD and a 2TB magnetic HD, is there any way
 to make them properly mirror each other [so the SSD performance is
 delivered while the magnetic disk safeguards the contents] - would you
 use softraid here?

 No. If you use a RAID1, you'll get the performance of the worse of the
 two disks. To support multiple disks with different characteristics and
 to get the most out of them was AFAIK one of the motivations for Matthew
 Dillon to write HAMMER.


I'm not sure about RAID1 in general, but I've been reading the softraid
code recently, and based on it I would claim that you get the write
performance of the slowest drive (assuming OpenBSD schedules writes to
the different drives in parallel), but read performance slightly higher
than the slower drive, since reads are done in round-robin fashion, hence
the SSD will speed things up a little bit.

Anyway, the interesting question is whether it makes sense to balance
this interleaved reading based on actual drive performance. AFAIK this
should be possible, but IMHO it'll not help that much, i.e. it'll not
provide much added reliability. Since reliability is my concern, I'm more
looking forward to seeing some kind of virtual drive with block
checksumming implemented in OpenBSD; IMHO that would provide some added
reliability when run, for example, in a RAID1 setup.

Karel



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Mariano Ignacio Baragiola

On 17/06/15 08:05, frantisek holop wrote:

https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/

also note the part relating to ext4:

I have to admit, I slept better before reading the
changelog.


fast, features, reliable: pick any 2.

-f



I don't think TRIM is to blame here. I don't understand
why someone in their sane mind would use the latest versions
of Ubuntu and Linux for servers. And yes, I know Ubuntu
for Servers is a thing, and yes, I know they fight this
instability with redundancy, but still...

About EXT4: it is not exactly the most trustworthy filesystem
there is.

Interesting reading, though.



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Theo de Raadt
 1) From the article, what can we see that Ext4/Linux actually did wrong
 here? - Is it that the TRIM command should be abandoned completely, or
 was it how it matched supported/unsupported drives, or something else?

Mariano was being a jerk by assuming it is a bug in ext4 or other
code.  Bringing his biases to the table, perhaps?

The problem is simple and comes as no surprise to many:

Until quite recently, many SSD drives had bugs in their TRIM support,
probably because TRIM was underutilized by operating systems.  Even when
operating systems use TRIM, they are rather cautious, because [extremely
long explanation full of pain deleted].

So it is not Linux filesystem code.  The phase of the moon is not
linked to these problems either.

 2) General on SSD: When an SSD starts to shrink because it starts to wear
 out, how is this handled and how does this appear to the OS, logs, and
 system software?

Invisible.  Even when a few drives make it visible in some way, it is
highly proprietary.



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Mikael
Wait, just for my (and I guess some others') clarity, three questions here:

1) From the article, what can we see that Ext4/Linux actually did wrong
here? - Is it that the TRIM command should be abandoned completely, or
was it how it matched supported/unsupported drives, or something else?


2) General on SSD: When an SSD starts to shrink because it starts to wear
out, how is this handled and how does this appear to the OS, logs, and
system software?


3) On OBSD, how would you generally suggest to make a magnetic-SSD hybrid
disk setup, where the SSD gives the speed and the magnetic storage gives
security?


Thanks!



2015-06-17 23:17 GMT+05:30 Mariano Ignacio Baragiola 
mari...@baragiola.com.ar:

 On 17/06/15 08:05, frantisek holop wrote:

 https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/

 also note the part relating to ext4:

 I have to admit, I slept better before reading the
 changelog.


 fast, features, reliable: pick any 2.

 -f


 I don't think TRIM is to blame here. I don't understand
 why someone in their sane mind would use the latest versions
 of Ubuntu and Linux for servers. And yes, I know Ubuntu
 for Servers is a thing, and yes, I know they fight this
 instability with redundancy, but still...

 About EXT4: it is not exactly the most trustworthy filesystem
 there is.

 Interesting reading, though.



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Mikael
2015-06-18 2:07 GMT+05:30 Gareth Nelson gar...@garethnelson.com:

 On point 3, hybrid SSD drives usually just present a standard IDE
 interface - just use a SATA controller and you don't need to worry about it


No I meant, you plug in a 2TB SSD and a 2TB magnetic HD, is there any way
to make them properly mirror each other [so the SSD performance is
delivered while the magnetic disk safeguards the contents] - would you
use softraid here?



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Karel Gardas
Honestly, with ~20% over-provisioning, once your SSD starts to shrink,
it's already only good for the dustbin.

Another question is this buggy TRIM, but I'm afraid this may be a hard
fight even with replicating and checksumming filesystems
(ZFS/HAMMER/BTRFS).

Cheers,
Karel

On Wed, Jun 17, 2015 at 10:30 PM, Mikael mikael.tr...@gmail.com wrote:
 2015-06-18 0:53 GMT+05:30 Theo de Raadt dera...@cvs.openbsd.org:

  2) General on SSD: When an SSD starts to shrink because it starts to wear
  out, how is this handled and how does this appear to the OS, logs, and
  system software?

 Invisible.  Even when a few drives make it visible in some way, it is
 highly proprietary.


 What, then, is the proper behavior for a program or system using an SSD,
 to deal with SSD degradation?

 So say you have a program altering a file's contents all the time, or you
 have file turnover on a system (rm f123; echo importantdata > f124). At
 some point the SSD will shrink and down the line reach zero capacity.


 The degradation process will be such that there will be no file content
 loss as long as the shrinking doesn't cut into the FS's total file size,
 right? (Spontaneously I'd presume the SSD informs the OS of shrinking at
 sector write time through failing sector writes, and the OS registers the
 shrunk parts in the FS as broken sectors.)


 Will the SSD+OS reflect the degradation status in getfsstat(2) f_blocks /
 df(1) blocks, so the program can just ensure there's some f_bavail / avail
 headroom all the time by shrinking (ftruncate etc.) its files accordingly,
 and shut down completely when f_blocks is too small?


 3) On OBSD, how would you generally suggest to make a magnetic-SSD hybrid
 disk setup, where the SSD gives the speed and the magnetic storage gives
 security?



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Gareth Nelson
If I wanted a setup like that I'd just use RAID; note the obvious: write
performance will be the same (or possibly slightly slower due to the
added RAID layer).

---
“Lanie, I’m going to print more printers. Lots more printers. One for
everyone. That’s worth going to jail for. That’s worth anything.” -
Printcrime by Cory Doctorow

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

On Wed, Jun 17, 2015 at 9:45 PM, Mikael mikael.tr...@gmail.com wrote:



 2015-06-18 2:07 GMT+05:30 Gareth Nelson gar...@garethnelson.com:

 On point 3, hybrid SSD drives usually just present a standard IDE
 interface - just use a SATA controller and you don't need to worry
 about it


 No I meant, you plug in a 2TB SSD and a 2TB magnetic HD, is there any way
 to make them properly mirror each other [so the SSD performance is
 delivered while the magnetic disk safeguards the contents] - would you
 use softraid here?



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Mikael
2015-06-18 0:53 GMT+05:30 Theo de Raadt dera...@cvs.openbsd.org:

  2) General on SSD: When an SSD starts to shrink because it starts to wear
  out, how is this handled and how does this appear to the OS, logs, and
  system software?

 Invisible.  Even when a few drives make it visible in some way, it is
 highly proprietary.


What, then, is the proper behavior for a program or system using an SSD,
to deal with SSD degradation?

So say you have a program altering a file's contents all the time, or you
have file turnover on a system (rm f123; echo importantdata > f124). At
some point the SSD will shrink and down the line reach zero capacity.


The degradation process will be such that there will be no file content
loss as long as the shrinking doesn't cut into the FS's total file size,
right? (Spontaneously I'd presume the SSD informs the OS of shrinking at
sector write time through failing sector writes, and the OS registers the
shrunk parts in the FS as broken sectors.)


Will the SSD+OS reflect the degradation status in getfsstat(2) f_blocks /
df(1) blocks, so the program can just ensure there's some f_bavail / avail
headroom all the time by shrinking (ftruncate etc.) its files accordingly,
and shut down completely when f_blocks is too small?
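
To make that concrete, the kind of guard I have in mind would look
roughly like this (a sketch only; the mount point and the 1 GB floor are
made-up values, and df's "Avail" column is the same f_bavail that
getfsstat(2) reports):

    #!/bin/sh
    # alert when available blocks on /data drop below a floor
    min=1048576                               # 1 GB in 1K-blocks
    avail=$(df -k /data | awk 'NR==2 { print $4 }')
    if [ "$avail" -lt "$min" ]; then
            echo "available space on /data: ${avail}K" | \
                mail -s "disk space alert" root
    fi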


3) On OBSD, how would you generally suggest to make a magnetic-SSD hybrid
disk setup, where the SSD gives the speed and the magnetic storage gives
security?



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Nick Holland
On 06/17/15 16:30, Mikael wrote:
 2015-06-18 0:53 GMT+05:30 Theo de Raadt dera...@cvs.openbsd.org:
 
  2) General on SSD: When an SSD starts to shrink because it starts to wear
  out, how is this handled and how does this appear to the OS, logs, and
  system software?

 Invisible.  Even when a few drives make it visible in some way, it is
 highly proprietary.

 
 What, then, is the proper behavior for a program or system using an SSD,
 to deal with SSD degradation?

Replace the drive before it is an issue.

 So say you have a program altering a file's contents all the time, or you
 have file turnover on a system (rm f123; echo importantdata > f124). At
 some point the SSD will shrink and down the line reach zero capacity.

That's not how it works.

The SSD has some number of spare storage blocks.  When it finds a bad
block, it locks out the bad block and swaps in a good block.

Curiously -- this is EXACTLY how modern spinning rust hard disks have
worked for about ... 20 years (yeah.  The pre-modern disks were more
exciting).  Write, verify, if error on verify, write to another storage
block, remap new block to old logical location.  Nothing new here (this
is why people say that "heads, cylinders and sectors per track" have
been meaningless for some time).  When the disk runs out of places to
write the good data, it throws a permanent write error back to the OS
and you have a really bad day.  The only difference in this with SSDs is
the amount of storage dedicated to this (be scared?).

Neither SSDs nor magnetic disks shrink to the outside world.  The
moment they need a replacement block that doesn't exist, the disk has
lost data for you and you should call it failed...it has not shrunk.
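
(You can at least watch the remap counter move before that point: the
drive exposes it through SMART, e.g. via smartmontools from packages.
Attribute numbering and naming vary by vendor, so treat this as a sketch:

    pkg_add smartmontools
    # attribute 5, Reallocated_Sector_Ct, counts blocks swapped for spares
    smartctl -A /dev/sd0c

A rising raw value there is the drive quietly burning through its spares.)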

Now, in both cases, this is assuming the drive fails in the way you
expect -- that the flaw will be spotted on immediate read-after-write,
while the data is still in the disk's cache or buffer.  There is more
than one way magnetic disks fail, there's more than one way SSDs fail.
People tend to hyperventilate over the one way and forget all the rest.

Run your SSDs in production servers for two or three years, then swap
them out.  That's about the warranty on the entire box.  The people that
believe in the manufacturer's warranty being the measure of suitability
for production replace their machines then anyway.  Zero your SSDs, give
them to your staff to stick in their laptops or game computers, or use
them for experimentation and dev systems after that.  Don't
hyperventilate over ONE mode of failure, the majority of your SSDs that
fail will probably fail for other reasons.

[snip]

 3) On OBSD, how would you generally suggest to make a magnetic-SSD hybrid
 disk setup, where the SSD gives the speed and the magnetic storage gives
 security?

Hybrid disks are a specific thing (or a few specific things) -- magnetic
disks with an SSD cache or magnetic/SSD combos where the first X% of the
disk is SSD, the rest is magnetic (or vice-versa, I guess, but I don't
recall having seen that).  SSD cache, you use like any other disk.
Split mode, you use as multiple partitions, as appropriate.

You clarified this to being about a totally different thing...mirroring
an SSD with a Rotating Rust disk.  At this point, most RAID systems I've
seen do not support a preferred read device.  Maybe they should start
thinking about that.  Maybe they shouldn't -- most applications that
NEED the SSD performance for something other than single user jollies
(i.e., a database server vs. having your laptop boot faster) will
face-plant severely should performance suddenly drop by an order of
magnitude.  In many of these cases, the performance drops to the point
that the system death-spirals as queries come in faster than they are
answered.  (this is why when you have an imbalanced redundant pair of
machines, the faster machine should always be the standby machine, not
the primary.  Sometimes "does the same job, just slower" is still quite
effectively "down".)

Nick.



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Gareth Nelson
On point 3, hybrid SSD drives usually just present a standard IDE interface
- just use a SATA controller and you don't need to worry about it

---
“Lanie, I’m going to print more printers. Lots more printers. One for
everyone. That’s worth going to jail for. That’s worth anything.” -
Printcrime by Cory Doctorow

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

On Wed, Jun 17, 2015 at 8:15 PM, Mikael mikael.tr...@gmail.com wrote:

 Wait, just for my (and I guess some others') clarity, three questions
 here:

 1) From the article, what can we see that Ext4/Linux actually did wrong
 here? - Is it that the TRIM command should be abandoned completely, or
 was it how it matched supported/unsupported drives, or something else?


 2) General on SSD: When an SSD starts to shrink because it starts to wear
 out, how is this handled and how does this appear to the OS, logs, and
 system software?


 3) On OBSD, how would you generally suggest to make a magnetic-SSD hybrid
 disk setup, where the SSD gives the speed and the magnetic storage gives
 security?


 Thanks!



 2015-06-17 23:17 GMT+05:30 Mariano Ignacio Baragiola 
 mari...@baragiola.com.ar:

  On 17/06/15 08:05, frantisek holop wrote:
 
  https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/
 
  also note the part relating to ext4:
 
  I have to admit, I slept better before reading the
  changelog.
 
 
  fast, features, reliable: pick any 2.
 
  -f
 
 
  I don't think TRIM is to blame here. I don't understand
  why someone in their sane mind would use the latest versions
  of Ubuntu and Linux for servers. And yes, I know Ubuntu
  for Servers is a thing, and yes, I know they fight this
  instability with redundancy, but still...
 
  About EXT4: it is not exactly the most trustworthy filesystem
  there is.
 
  Interesting reading, though.



Re: when SSDs are not so solid or why no TRIM support can be a good thing :)

2015-06-17 Thread Gareth Nelson
Paranoia over SSDs messing up is why I got hybrid drives - I still get a
decent performance boost, but all my data is on good old-fashioned
magnetic platters.

---
“Lanie, I’m going to print more printers. Lots more printers. One for
everyone. That’s worth going to jail for. That’s worth anything.” -
Printcrime by Cory Doctorow

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

On Wed, Jun 17, 2015 at 12:05 PM, frantisek holop min...@obiit.org wrote:

 https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/

 also note the part relating to ext4:

 I have to admit, I slept better before reading the
 changelog.


 fast, features, reliable: pick any 2.

 -f
 --
 think honk if you're a telepath.