Hello tovis & list,

On Mon, Nov 28, 2011 at 08:48:15PM +0100, tovis wrote:
> J. R. Okajima
> > Unfortunately I don't understand what you wrote and what you want to do.
> >
> In short, I want to minimize writes to a USB drive which is used as a
> hard disk in an embedded-like device, to push its lifetime up to a year or more.

Reading this, I strongly feel that I should say something at this point,
since I have some first-hand experience with flash lifetime in
embedded as well as non-embedded environments.

First of all, extending the lifetime "up to a year or more" just by
using aufs is, in my opinion, entirely the wrong approach.

Let's have a look at flash memory first. There are two types of flash,
NOR and NAND memory.

NAND memory is cheaper and occupies less physical space for the same
storage capacity, compared to NOR. But as opposed to NOR, it is
organized in larger chunks which have to be written block-wise, instead
of allowing single bytes or words to be changed.

NOR can be written up to 10 times more often than NAND before a cell
fails, so if NAND makes it up to 100,000 write cycles, NOR will manage
up to 1 million. Cheap, everyday flash drives are usually based on NAND.

Now if the data were just written "as is" with a fixed layout, the
first blocks containing the file allocation table would be written very
often (as would metadata such as modification and access times and file
sizes), while the majority of later blocks containing the actual data
would effectively remain read-only unless something changes. With a
fixed block layout and FAT or ext2 directly on the media, the flash
would wear out very quickly, which is probably what you had in mind.
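
Incidentally, a good part of those metadata writes are mere access time
updates, which can be switched off with a standard mount option; a
minimal example (device and mount point are placeholders):

mount -o remount,noatime /dev/sdb1 /mnt/usb

With noatime, merely reading a file no longer triggers a write to its
inode, which already removes a whole class of unnecessary flash writes.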

Now there are different approaches to compensate for this limited
lifetime, one on the hardware and one on the software layer.

(Hardware) controller approach
------------------------------

The easiest but more expensive way for embedded devices is an
"intelligent" controller that reorganizes block addressing through a
wear leveling algorithm: instead of rewriting block zero very often, a
new physical block is used each time and the "obsolete" one is marked
as free, only to be rewritten after all other blocks have been written
once. This goes as far as automatically "refreshing" blocks, i.e.
copying blocks that are only ever read over to new places, in order to
free up the seldom-written block for a new write cycle.

For the developer, controller-driven wear leveling is completely
transparent, i.e. you will never notice that the internal structure of
the block sequence has changed, and the number of total write cycles is
(almost) multiplied by the number of existing blocks.

Apparently, most USB flash drives use this technology in order to avoid
warranty cases with FAT-type file systems (mostly used in digital
cameras or video recorders), which concentrate a large number of writes
on the superblock and FAT. If this is the case for the device you are
working with, you will most likely never reach the maximum number of
write cycles within the specified write-independent total lifetime of
flash memory, which, as far as I recall, is about 5 to 10 years even in
read-only mode.

Software wear leveling algorithms
---------------------------------

In embedded devices which have to stay "cheap", the intelligent
controller is missing, and you have to use software algorithms to
manage defective erase blocks yourself. For this, wear leveling file
systems such as JFFS2 are used, which treat flash memory as a kind of
"ring buffer" and automatically use new blocks on consecutive writes,
achieving the same effect as an intelligent controller would.
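
Just to illustrate, on a board where raw flash is exposed through the
MTD subsystem, mounting a JFFS2 partition looks like this (the
partition number is only an example):

mount -t jffs2 /dev/mtdblock2 /mnt/flash

Note that JFFS2 works on MTD devices only, i.e. raw flash without a
controller; it cannot be used directly on /dev/sd* style block devices
such as USB drives.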

IMHO, it does not make much sense to use hardware and software wear
leveling at the same time, because the management overhead results in
actually MORE writes than necessary. So, you should first find out
whether your flash storage has its own controller or not, before
deciding on a strategy at the file system layer.
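
Under Linux, a quick plausibility check is possible: raw,
controller-less flash shows up as MTD partitions, while
controller-managed media appear as ordinary block devices:

cat /proc/mtd          # raw flash partitions, if any
cat /proc/partitions   # sdX-style entries belong to controller-managed devices

If your drive appears as /dev/sdX, it has its own controller, and file
systems like JFFS2 cannot be used on it directly anyway.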

Linux-specific
--------------

In addition to the concepts above, Linux can reduce the number of
_physical_ writes using its dynamic block buffer cache. By default,
data to be written is collected in memory for about 5 seconds before
being synchronized (i.e. written to the physical layer).

This means that if you have about 100 writes to the same block per second,
the Linux block buffer alone will already enhance the lifetime of that
block by a factor of 500, regardless of the wear leveling algorithms
mentioned above!

Because of performance issues with flash (writing large chunks of data
can sometimes take longer than 5 seconds), I'm increasing this interval
to 30 seconds in Knoppix:

echo 3000 > /proc/sys/vm/dirty_writeback_centisecs
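
To keep this setting across reboots, the same value can go into
/etc/sysctl.conf using standard sysctl syntax (nothing Knoppix-specific
about it):

vm.dirty_writeback_centisecs = 3000

The related vm.dirty_expire_centisecs setting controls how old dirty
data may become before it must be written out, so if you tune the
writeback interval, that knob may be worth a look, too.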

So, before using aufs just because you are afraid of too many writes to
flash, I would consider the effects above.

Other methods and issues
------------------------

From my own experience, I have not yet been able to kill an ordinary
USB flash disk or SD card by "excessive" writes. I tried hard, really,
even with the mount -o sync option. The German c't magazine has also
run write tests over a year with millions of writes, and was not able
to kill a single block.

In an embedded device without an intelligent flash controller, however,
I'm using JFFS2 or _only_ controlled writes, i.e. I normally keep the
device mounted read-only, put every changed file on a tmpfs ramdisk
(configuration files are symlinked into the tmpfs), and only write
changed data back to flash (or create a configuration tar archive)
during a user-initiated "save configuration" command.

For devices with controllers, a poweroff during a write cycle can kill
the flash management's internal structure, and the flash drive can
become inaccessible beyond any possibility of repair. This can happen,
for example, if you remove an SD card while writing to it. This way, I
have turned several 8GB flash cards into read-only 4MB (!) devices that
are not even partitionable anymore. But this is totally independent of
the problem of excessive writes.

A quote from Wikipedia:

  "In practice, flash file systems are only used for memory technology
  devices (MTDs), which are embedded flash memories that do not have a
  controller. Removable flash memory cards and USB flash drives have
  built-in controllers to perform wear leveling and error correction so
  use of a specific flash file system does not add any benefit."

Summary: using aufs just for saving write cycles on flash is probably
the wrong approach, yet it may be more comfortable ("lazy"?) than using
a device-specific method of wear leveling or controlled writes.

> To reach this I have to use aufs, to unify some directories and files from
> /var of the system. But I need to keep some important, not too recently
> written configuration files, such as alsamixer settings, or crontab. With
> your valuable work and help I'm close :D
> (Some days ago I tested the wrong configuration; over some days there
> were about 30 writes, and a usual USB drive can handle about 10000. I
> think that's good enough for me.)

If you are able to kill an ordinary USB flash drive by writing "too
many times" to it, maybe using a method that I don't know yet, please
tell me where I can buy such a device for testing. An ordinary flash
drive that you buy in a regular shop will handle several million file
writes, and you won't even notice defective blocks, because they are
mapped out by the internal controller and replaced by blocks from a
"spare block" list.

Regards
-Klaus Knopper
PS: Now this got somewhat lengthy, sorry. I may reuse this explanation for
an article later.
