This is an automated email from the ASF dual-hosted git repository.

xiaoxiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/nuttx.git

commit 6448135f65b2f21327f83a5eedaa9aae7c02e3a5
Author: raiden00pl <[email protected]>
AuthorDate: Wed Oct 25 15:21:20 2023 +0200

    Documentation: migrate /fs
---
 Documentation/components/filesystem/binfs.rst      |  30 ++
 Documentation/components/filesystem/cromfs.rst     | 254 +++++++++++
 Documentation/components/filesystem/index.rst      |  57 +++
 Documentation/components/filesystem/mmap.rst       |  79 ++++
 Documentation/components/filesystem/nxffs.rst      | 186 ++++++++
 .../{filesystem.rst => filesystem/partition.rst}   |  87 +---
 Documentation/components/filesystem/procfs.rst     |  51 +++
 Documentation/components/filesystem/smartfs.rst    | 480 +++++++++++++++++++++
 Documentation/components/filesystem/spiffs.rst     |  31 ++
 Documentation/components/filesystem/unionfs.rst    |  96 +++++
 Documentation/components/filesystem/zipfs.rst      |  41 ++
 Documentation/components/index.rst                 |   2 +-
 12 files changed, 1308 insertions(+), 86 deletions(-)

diff --git a/Documentation/components/filesystem/binfs.rst 
b/Documentation/components/filesystem/binfs.rst
new file mode 100644
index 0000000000..0acbc2367a
--- /dev/null
+++ b/Documentation/components/filesystem/binfs.rst
@@ -0,0 +1,30 @@
+============
+``fs/binfs``
+============
+
+This is the binfs file system that allows "fake" execution of NSH
+built-in applications via the file system.  The binfs file system can be
+built into the system by enabling::
+
+    CONFIG_BUILTIN=y
+    CONFIG_FS_BINFS=y
+
+It can then be mounted from the NSH command line like::
+
+   mount -t binfs /bin
+
+Example::
+
+  NuttShell (NSH) NuttX-6.31
+  nsh> hello
+  nsh: hello: command not found
+
+  nsh> mount -t binfs /bin
+  nsh> ls /bin
+  ls /bin
+  /bin:
+   hello
+
+  nsh> /bin/hello
+  Hello, World!!
+
diff --git a/Documentation/components/filesystem/cromfs.rst 
b/Documentation/components/filesystem/cromfs.rst
new file mode 100644
index 0000000000..34f794bf36
--- /dev/null
+++ b/Documentation/components/filesystem/cromfs.rst
@@ -0,0 +1,254 @@
+======
+cromfs
+======
+
+Overview
+========
+
+This directory contains the CROMFS file system.  This is an in-memory
+(meaning no block driver), read-only (meaning that it can lie in FLASH) file
+system.  It uses LZF decompression on data only (metadata is not
+compressed).
+
+It accesses the in-memory file system via direct memory reads and, hence,
+can only reside in random access NOR-like FLASH.  It is intended for use
+with on-chip FLASH available on most MCUs (the design could probably be
+extended to access non-random-access FLASH as well, but those extensions
+are not yet in place).
+
+I do not have a good way to measure how much compression we get using LZF.
+I have seen 37% compression reported in other applications, so I have to
+accept that for now.  That means, for example, that you could have a file
+system with 512Kb of data in only 322Kb of FLASH, giving you 190Kb to do
+other things with.
+
+LZF compression is not known for its high compression ratios, but rather
+for fast decompression.  According to the author of the LZF decompression
+routine, it is nearly as fast as a memcpy!
+
+There is also a new tool at /tools/gencromfs.c that will generate binary
+images for the NuttX CROMFS file system and an example CROMFS file
+system image at apps/examples/cromfs.  That example includes a test file
+system that looks like::
+
+  $ ls -Rl ../apps/examples/cromfs/cromfs
+  ../apps/examples/cromfs/cromfs:
+  total 2
+  -rwxr--r--+ 1 spuda spuda 171 Mar 20 08:02 BaaBaaBlackSheep.txt
+  drwxrwxr-x+ 1 spuda spuda   0 Mar 20 08:11 emptydir
+  -rwxr--r--+ 1 spuda spuda 118 Mar 20 08:05 JackSprat.txt
+  drwxrwxr-x+ 1 spuda spuda   0 Mar 20 08:06 testdir1
+  drwxrwxr-x+ 1 spuda spuda   0 Mar 20 08:10 testdir2
+  drwxrwxr-x+ 1 spuda spuda   0 Mar 20 08:05 testdir3
+  ../apps/examples/cromfs/cromfs/emptydir:
+  total 0
+  ../apps/examples/cromfs/cromfs/testdir1:
+  total 2
+  -rwxr--r--+ 1 spuda spuda 249 Mar 20 08:03 DingDongDell.txt
+  -rwxr--r--+ 1 spuda spuda 247 Mar 20 08:06 SeeSawMargorieDaw.txt
+  ../apps/examples/cromfs/cromfs/testdir2:
+  total 5
+  -rwxr--r--+ 1 spuda spuda  118 Mar 20 08:04 HickoryDickoryDock.txt
+  -rwxr--r--+ 1 spuda spuda 2082 Mar 20 08:10 TheThreeLittlePigs.txt
+  ../apps/examples/cromfs/cromfs/testdir3:
+  total 1
+  -rwxr--r--+ 1 spuda spuda 138 Mar 20 08:05 JackBeNimble.txt
+
+When built into NuttX and deployed on a target, it looks like::
+
+  NuttShell (NSH) NuttX-7.24
+  nsh> mount -t cromfs /mnt/cromfs
+  nsh> ls -Rl /mnt/cromfs
+  /mnt/cromfs:
+   dr-xr-xr-x       0 .
+   -rwxr--r--     171 BaaBaaBlackSheep.txt
+   dr-xr-xr-x       0 emptydir/
+   -rwxr--r--     118 JackSprat.txt
+   dr-xr-xr-x       0 testdir1/
+   dr-xr-xr-x       0 testdir2/
+   dr-xr-xr-x       0 testdir3/
+  /mnt/cromfs/emptydir:
+   drwxrwxr-x       0 .
+   dr-xr-xr-x       0 ..
+  /mnt/cromfs/testdir1:
+   drwxrwxr-x       0 .
+   dr-xr-xr-x       0 ..
+   -rwxr--r--     249 DingDongDell.txt
+   -rwxr--r--     247 SeeSawMargorieDaw.txt
+  /mnt/cromfs/testdir2:
+   drwxrwxr-x       0 .
+   dr-xr-xr-x       0 ..
+   -rwxr--r--     118 HickoryDickoryDock.txt
+   -rwxr--r--    2082 TheThreeLittlePigs.txt
+  /mnt/cromfs/testdir3:
+   drwxrwxr-x       0 .
+   dr-xr-xr-x       0 ..
+   -rwxr--r--     138 JackBeNimble.txt
+  nsh>
+
+Everything I have tried works:  examining directories, catting files, etc.
+The "." and ".." hard links also work::
+
+  nsh> cd /mnt/cromfs
+  nsh> cat emptydir/../testdir1/DingDongDell.txt
+  Ding, dong, bell,
+  Pussy's in the well.
+  Who put her in?
+  Little Johnny Green.
+
+  Who pulled her out?
+  Little Tommy Stout.
+  What a naughty boy was that,
+  To try to drown poor pussy cat,
+  Who never did him any harm,
+  And killed the mice in his father's barn.
+
+  nsh>
+
+gencromfs
+=========
+
+The gencromfs program can be found in tools/.  It is a single C file called
+gencromfs.c.  It can be built in this way::
+
+    cd tools
+    make -f Makefile.host gencromfs
+
+The gencromfs tool is used to generate CROMFS file system images.  Usage is
+simple::
+
+    gencromfs <dir-path> <out-file>
+
+Where::
+
+    <dir-path> is the path to the directory that will be at the root of the
+      new CROMFS file system image.
+    <out-file> is the name of the generated output C file.  This file must
+      be compiled in order to generate the binary CROMFS file system
+      image.
+
+All of these steps are automated in the apps/examples/cromfs/Makefile.
+Refer to that Makefile as a reference.
+
+Architecture
+============
+
+The CROMFS file system is represented by an in-memory data structure.  This
+structure is a "tree."  At the root of the tree is a "volume node" that
+describes the overall file system.  Other entities within the file
+system are represented by other types of nodes:  hard links, directories, and
+files.  These nodes are all described in fs/cromfs/cromfs.h.
+
+In addition to general volume information, the volume node provides an
+offset to the "root directory".  The root directory, like all other
+CROMFS directories, is simply a singly linked list of other nodes:  hard link
+nodes, directory nodes, and files.  This list is managed by "peer offsets":
+Each node in the directory contains an offset to its peer in the same
+directory.  This directory list is terminated with a zero offset.
+
+The volume header lies at offset zero.  Hence, any offset to a node or data
+block can be converted to an absolute address in the in-memory CROMFS image
+by simply adding that offset to the well-known address of the volume header.
+
+Each hard link, directory, and file node in the directory list includes
+such a "peer offset" to the next node in the list.  Each node is followed
+by the NUL-terminated name of the node.  Each node also holds an additional
+offset.  Directory nodes contain a "child offset".  That is, the offset to
+the first entry in another singly linked list of nodes comprising the sub-
+directory.
+
+Hard link nodes hold the "link offset" to the node which is the target of
+the link.  The link offset may be an offset to another hard link node, to a
+directory, or to a file node.  The directory link offset would refer to the
+first node in the singly linked directory list that represents the directory.
+
+File nodes provide file data.  The file name string is followed by a
+variable length list of compressed data blocks.  In this case each
+compressed data block begins with an LZF header as described in
+include/lzf.h.
+
+So, given this description, we could illustrate the sample CROMFS file
+system above with these nodes (where V=volume node, H=hard link node,
+D=directory node, F=file node, and D,D,D,...D=compressed data blocks)::
+
+  V
+  `- +- H: .
+     |
+     +- F: BaaBaaBlackSheep.txt
+     |  `- D,D,D,...D
+     +- D: emptydir
+     |  |- H: .
+     |  `- H: ..
+     +- F: JackSprat.txt
+     |  `- D,D,D,...D
+     +- D: testdir1
+     |  |- H: .
+     |  |- H: ..
+     |  |- F: DingDongDell.txt
+     |  |  `- D,D,D,...D
+     |  `- F: SeeSawMargorieDaw.txt
+     |     `- D,D,D,...D
+     +- D: testdir2
+     |  |- H: .
+     |  |- H: ..
+     |  |- F: HickoryDickoryDock.txt
+     |  |  `- D,D,D,...D
+     |  `- F: TheThreeLittlePigs.txt
+     |     `- D,D,D,...D
+     +- D: testdir3
+        |- H: .
+        |- H: ..
+        `- F: JackBeNimble.txt
+           `- D,D,D,...D
+
+Where, for example::
+
+  H: ..
+
+    Represents a hard-link node with name ".."
+
+  |
+  +- D: testdir1
+  |  |- H: .
+
+    Represents a directory node named "testdir1".  The first node of the
+    directory list is a hard link with name "."
+
+  |
+  +- F: JackSprat.txt
+  |  `- D,D,D,...D
+
+    Represents a file node named "JackSprat.txt" and is followed by some
+    sequence of compressed data blocks, D.
+
+Configuration
+=============
+
+To build the CROMFS file system, you would add the following to your
+configuration:
+
+1. Enable LZF (The other LZF settings apply only to compression
+   and, hence, have no impact on CROMFS which only decompresses)::
+
+     CONFIG_LIBC_LZF=y
+
+   NOTE: This should be selected automatically when CONFIG_FS_CROMFS
+   is enabled.
+
+2. Enable the CROMFS file system::
+
+     CONFIG_FS_CROMFS=y
+
+3. Enable the apps/examples/cromfs example::
+
+     CONFIG_EXAMPLES_CROMFS=y
+
+   Or the apps/examples/elf example if you like::
+
+     CONFIG_ELF=y
+     # CONFIG_BINFMT_DISABLE is not set
+     CONFIG_EXAMPLES_ELF=y
+     CONFIG_EXAMPLES_ELF_CROMFS=y
+
+   Or implement your own custom CROMFS file system using that example as a
+   guideline.
diff --git a/Documentation/components/filesystem/index.rst 
b/Documentation/components/filesystem/index.rst
new file mode 100644
index 0000000000..637ba4b903
--- /dev/null
+++ b/Documentation/components/filesystem/index.rst
@@ -0,0 +1,57 @@
+=================
+NuttX File System
+=================
+
+**Overview**. NuttX includes an optional, scalable file system.
+This file-system may be omitted altogether; NuttX does not depend
+on the presence of any file system.
+
+**Pseudo Root File System**. A simple *in-memory*, *pseudo* file
+system can be enabled by default. This is an *in-memory* file
+system because it does not require any storage medium or block
+driver support. Rather, file system contents are generated
+on-the-fly as referenced via standard file system operations
+(open, close, read, write, etc.). In this sense, the file system
+is a *pseudo* file system (in the same sense that the Linux
+``/proc`` file system is also referred to as a pseudo file
+system).
+
+Any user supplied data or logic can be accessed via the
+pseudo-file system. Built in support is provided for character and
+block `drivers <#DeviceDrivers>`__ in the ``/dev`` pseudo file
+system directory.
+
+**Mounted File Systems**. The simple in-memory file system can be
+extended by mounting block devices that provide access to true
+file systems backed up via some mass storage device. NuttX
+supports the standard ``mount()`` command that allows a block
+driver to be bound to a mountpoint within the pseudo file system
+and to a file system. At present, NuttX supports the standard VFAT
+and ROMFS file systems, a special, wear-leveling NuttX FLASH File
+System (NXFFS), as well as a Network File System client (NFS
+version 3, UDP).
+
+**Comparison to Linux**. From a programming perspective, the NuttX
+file system appears very similar to a Linux file system. However,
+there is a fundamental difference: The NuttX root file system is a
+pseudo file system and true file systems may be mounted in the
+pseudo file system. In the typical Linux installation by
+comparison, the Linux root file system is a true file system and
+pseudo file systems may be mounted in the true, root file system.
+The approach selected by NuttX is intended to support greater
+scalability from the very tiny platform to the moderate platform.
+
+
+.. toctree::
+  :maxdepth: 1
+
+  binfs.rst
+  cromfs.rst
+  mmap.rst
+  nxffs.rst
+  procfs.rst
+  smartfs.rst
+  spiffs.rst
+  unionfs.rst
+  zipfs.rst
+  partition.rst
diff --git a/Documentation/components/filesystem/mmap.rst 
b/Documentation/components/filesystem/mmap.rst
new file mode 100644
index 0000000000..7b4c9fba68
--- /dev/null
+++ b/Documentation/components/filesystem/mmap.rst
@@ -0,0 +1,79 @@
+===========
+``fs/mmap``
+===========
+
+NuttX operates in a flat open address space and is focused on MCUs that do
+not support Memory Management Units (MMUs).  Therefore, NuttX generally does
+not require mmap() functionality and the MCUs generally cannot support true
+memory-mapped files.
+
+However, memory mapping of files is the mechanism used by NXFLAT, the NuttX
+tiny binary format, to get files into memory in order to execute them.
+mmap() support is therefore required to support NXFLAT.  There are two
+conditions where mmap() can be supported:
+
+1. mmap can be used to support eXecute In Place (XIP) on random access media
+   under the following very restrictive conditions:
+
+   a. The filesystem implements the mmap file operation.  Any file
+      system that maps files contiguously on the media should support
+      this operation (vs. file systems that scatter files over the media
+      in non-contiguous sectors).  As of this writing, ROMFS is the
+      only file system that meets this requirement.
+
+   b. The underlying block driver supports the BIOC_XIPBASE ioctl
+      command that maps the underlying media to a randomly accessible
+      address.  At present, only the RAM/ROM disk driver does this.
+
+   Some limitations of this approach are as follows:
+
+   a. Since no real mapping occurs, all of the file contents are "mapped"
+      into memory.
+
+   b. All mapped files are read-only.
+
+   c. There are no access privileges.
+
+2. If CONFIG_FS_RAMMAP is defined in the configuration, then mmap() will
+   support simulation of memory mapped files by copying files whole
+   into RAM.  These copied files have some of the properties of
+   standard memory mapped files.  There are many, many exceptions,
+   however.  Some of these include:
+
+   a. The goal is to have a single region of memory that represents a single
+      file and can be shared by many threads.  That is, given a filename a
+      thread should be able to open the file, get a file descriptor, and
+      call mmap() to get a memory region.  Different file descriptors opened
+      with the same file path should get the same memory region when mapped.
+
+      The limitation in the current design is that there is insufficient
+      knowledge to know that these different file descriptors correspond to
+      the same file.  So, for the time being, a new memory region is created
+      each time that rammap() is called. Not very useful!
+
+   b. The entire mapped portion of the file must be present in memory.
+      Since it is assumed that the MCU does not have an MMU, on-demand
+      paging in of file blocks cannot be supported.  Since the whole mapped
+      portion of the file must be present in memory, there are limitations
+      on the size of files that may be memory mapped (especially on MCUs
+      with no significant RAM resources).
+
+   c. All mapped files are read-only.  You can write to the in-memory image,
+      but the file contents will not change.
+
+   d. There are no access privileges.
+
+   e. Since there are no processes in NuttX, all mmap() and munmap()
+      operations have immediate, global effects.  Under Linux, for example,
+      munmap() would eliminate only the mapping within a process; the mappings
+      to the same file in other processes would not be affected.
+
+   f. Like a true mapped file, the region will persist after closing the file
+      descriptor.  However, at present, these RAM-copied file regions are
+      **not** automatically "unmapped" (i.e., freed) when a thread is terminated.
+      This is primarily because it is not possible to know how many users
+      of the mapped region there are and, therefore, when would be the
+      appropriate time to free the region (other than when munmap is called).
+
+      NOTE: If the design limitation of a) were solved, then it would be
+      easy to solve exception d) as well.
diff --git a/Documentation/components/filesystem/nxffs.rst 
b/Documentation/components/filesystem/nxffs.rst
new file mode 100644
index 0000000000..c9457eae1c
--- /dev/null
+++ b/Documentation/components/filesystem/nxffs.rst
@@ -0,0 +1,186 @@
+=====
+NXFFS
+=====
+
+This document contains information about the implementation of the NuttX
+wear-leveling FLASH file system, NXFFS.
+
+General NXFFS organization
+==========================
+
+The following example assumes 4 logical blocks per FLASH erase block.  The
+actual relationship is determined by the FLASH geometry reported by the MTD
+driver::
+
+  ERASE LOGICAL                   Inodes begin with an inode header.  Inodes may
+  BLOCK BLOCK       CONTENTS      be marked as "deleted," pending re-packing.
+    n   4*n     --+--------------+
+                  |BBBBBBBBBBBBBB| Logical block header
+                  |IIIIIIIIIIIIII| Inodes begin with an inode header
+                  |DDDDDDDDDDDDDD| Data block containing inode data
+                  | (Inode Data) |
+        4*n+1   --+--------------+
+                  |BBBBBBBBBBBBBB| Logical block header
+                  |DDDDDDDDDDDDDD| Inodes may consist of multiple data blocks
+                  | (Inode Data) |
+                  |IIIIIIIIIIIIII| Next inode header
+                  |              | Possibly a few unused bytes at the end of a block
+        4*n+2   --+--------------+
+                  |BBBBBBBBBBBBBB| Logical block header
+                  |DDDDDDDDDDDDDD|
+                  | (Inode Data) |
+        4*n+3   --+--------------+
+                  |BBBBBBBBBBBBBB| Logical block header
+                  |IIIIIIIIIIIIII| Next inode header
+                  |DDDDDDDDDDDDDD|
+                  | (Inode Data) |
+   n+1  4*(n+1) --+--------------+
+                  |BBBBBBBBBBBBBB| Logical block header
+                  |              | All FLASH is unused after the end of the
+                  |              | final inode.
+                --+--------------+
+
+General operation
+=================
+
+Inodes are written starting at the beginning of FLASH.  As inodes are
+deleted, they are marked as deleted but not removed.  As new inodes are
+written, allocations proceed toward the end of the FLASH -- thus,
+supporting wear leveling by using all FLASH blocks equally.
+
+When the FLASH becomes full (no more space at the end of the FLASH), a
+re-packing operation must be performed:  All inodes marked deleted are
+finally removed and the remaining inodes are packed at the beginning of
+the FLASH.  Allocations then continue at the freed FLASH memory at the
+end of the FLASH.
+
+Headers
+=======
+
+``BLOCK HEADER``
+    The block header is used to determine if the block has ever been
+    formatted and also indicates bad blocks which should never be used.
+
+``INODE HEADER``
+    Each inode begins with an inode header that contains, among other things,
+    the name of the inode, the offset to the first data block, and the
+    length of the inode data.
+
+    At present, the only kind of inode supported is a file.  So for now, the
+    terms file and inode are interchangeable.
+
+``INODE DATA HEADER``
+    Inode data is enclosed in a data header.  For a given inode, there
+    is at most one inode data block per logical block.  If the inode data
+    spans more than one logical block, then the inode data may be enclosed
+    in multiple data blocks, one per logical block.
+
+NXFFS Limitations
+=================
+
+This implementation is very simple and, as a result, has several limitations
+that you should be aware of before opting to use NXFFS:
+
+1. Since the files are contiguous in FLASH and since allocations always
+   proceed toward the end of the FLASH, there can only be one file opened
+   for writing at a time.  Multiple files may be opened for reading.
+
+2. Files may not be increased in size after they have been closed.  The
+   O_APPEND open flag is not supported.
+
+3. Files are always written sequentially.  Seeking within a file opened for
+   writing will not work.
+
+4. There are no directories; however, '/' may be used within a file name
+   string providing some illusion of directories.
+
+5. Files may be opened for reading or for writing, but not both: The O_RDWR
+   open flag is not supported.
+
+6. The re-packing process occurs only during a write when the free FLASH
+   memory at the end of the FLASH is exhausted.  Thus, occasionally, file
+   writing may take a long time.
+
+7. Another limitation is that there can be only a single NXFFS volume
+   mounted at any time.  This has to do with the fact that we bind to
+   an MTD driver (instead of a block driver) and bypass all of the normal
+   mount operations.
+
+Multiple Writers
+================
+
+As mentioned in the limitations above, there can be only one file opened
+for writing at a time.  If one thread has a file opened for writing and
+another thread attempts to open a file for writing, then that second
+thread will be blocked and will have to wait for the first thread to
+close the file.
+
+Such behavior may or may not be a problem for your application, depending on
+(1) how long the first thread keeps the file open for writing and (2) how
+critical the behavior of the second thread is.  Note that writing to FLASH
+can always trigger a major FLASH reorganization and, hence, there is no
+way to guarantee the first condition: The first thread may have the file
+open for a long time even if it only intends to write a small amount.
+
+Also note that a deadlock condition would occur if the SAME thread
+attempted to open two files for writing.  The thread would be
+blocked waiting for itself to close the first file.
+
+ioctls
+======
+
+The file system supports two ioctls:
+
+``FIOC_REFORMAT``
+  Will force the flash to be erased and a fresh, empty NXFFS file system to
+  be written on it.
+
+``FIOC_OPTIMIZE``
+  Will force immediate repacking of the file system.  This will avoid the
+  delays to repack the file system in the emergency case when all of the
+  FLASH memory has been used.  Instead, you can defer the garbage collection
+  to a time when the system is not busy.  Calling this function on a thrashing
+  file system will increase the amount of wear on the FLASH if you use it
+  frequently!
+
+Things to Do
+============
+
+- The statfs() implementation is minimal.  It should have some calculation
+  of the f_bfree, f_bavail, f_files, f_ffree return values.
+- There are too many allocs and frees.  More structures may need to be
+  pre-allocated.
+- The file name is always extracted and held in allocated, variable-length
+  memory.  The file name is not used during reading and eliminating the
+  file name in the entry structure would improve performance.
+- There is a big inefficiency in reading.  On each read, the logic searches
+  for the read position from the beginning of the file each time.  This
+  may be necessary whenever an lseek() is done, but not in general.  Read
+  performance could be improved by keeping FLASH offset and read positional
+  information in the read open file structure.
+- Fault tolerance must be improved.  We need to be absolutely certain that
+  any FLASH errors do not cause the file system to behave incorrectly.
+- Wear leveling might be improved (?).  Files are re-packed at the front
+  of FLASH as part of the clean-up operation.  However, that means the files
+  that are not modified often become fixed in place at the beginning of
+  FLASH.  This reduces the size of the pool of moving files at the end of the
+  FLASH.  As the file system becomes more filled with fixed files at the
+  front of the device, the level of wear on the blocks at the end of the
+  FLASH increases.
+- When the time comes to reorganize the FLASH, the system may be
+  unavailable for a long time.  That is a bad behavior.  What is needed,
+  I think, is a garbage collection task that runs periodically so that
+  when the big reorganization event occurs, most of the work is already
+  done.  That garbage collection should search for valid blocks that no
+  longer contain valid data.  It should pre-erase them, put them in
+  a good but empty state... all ready for file system re-organization.
+  NOTE:  There is the FIOC_OPTIMIZE IOCTL command that can be used by an
+  application to force garbage collection when the system is not busy.
+  If used judiciously by the application, this can eliminate the problem.
+- And worse, when NXFFS reorganizes the FLASH, a power cycle can
+  damage the file system content if it happens at the wrong time.
+- The current design does not permit re-opening of files for write access
+  unless the file is truncated to zero length.  This effectively prohibits
+  implementation of a proper truncate() method which should alter the
+  size of a previously written file.  There is some fragmentary logic in
+  place but even this is conditioned out with __NO_TRUNCATE_SUPPORT__.
diff --git a/Documentation/components/filesystem.rst 
b/Documentation/components/filesystem/partition.rst
similarity index 63%
rename from Documentation/components/filesystem.rst
rename to Documentation/components/filesystem/partition.rst
index 5e1dfc6cb7..5451d4d114 100644
--- a/Documentation/components/filesystem.rst
+++ b/Documentation/components/filesystem/partition.rst
@@ -1,51 +1,9 @@
-=================
-NuttX File System
-=================
-
-**Overview**. NuttX includes an optional, scalable file system.
-This file-system may be omitted altogether; NuttX does not depend
-on the presence of any file system.
-
-**Pseudo Root File System**. A simple *in-memory*, *pseudo* file
-system can be enabled by default. This is an *in-memory* file
-system because it does not require any storage medium or block
-driver support. Rather, file system contents are generated
-on-the-fly as referenced via standard file system operations
-(open, close, read, write, etc.). In this sense, the file system
-is *pseudo* file system (in the same sense that the Linux
-``/proc`` file system is also referred to as a pseudo file
-system).
-
-Any user supplied data or logic can be accessed via the
-pseudo-file system. Built in support is provided for character and
-block `drivers <#DeviceDrivers>`__ in the ``/dev`` pseudo file
-system directory.
-
-**Mounted File Systems** The simple in-memory file system can be
-extended my mounting block devices that provide access to true
-file systems backed up via some mass storage device. NuttX
-supports the standard ``mount()`` command that allows a block
-driver to be bound to a mountpoint within the pseudo file system
-and to a file system. At present, NuttX supports the standard VFAT
-and ROMFS file systems, a special, wear-leveling NuttX FLASH File
-System (NXFFS), as well as a Network File System client (NFS
-version 3, UDP).
-
-**Comparison to Linux** From a programming perspective, the NuttX
-file system appears very similar to a Linux file system. However,
-there is a fundamental difference: The NuttX root file system is a
-pseudo file system and true file systems may be mounted in the
-pseudo file system. In the typical Linux installation by
-comparison, the Linux root file system is a true file system and
-pseudo file systems may be mounted in the true, root file system.
-The approach selected by NuttX is intended to support greater
-scalability from the very tiny platform to the moderate platform.
-
+===============
 Partition Table
 ===============
 
 Text based Partition Table
-----------------------------------
+==========================
 
 **Summary**
 
@@ -213,44 +171,3 @@ Blank line && New line delim
     /dev/partition7   offset 0x00480000, size 0x00010000
     /dev/data         offset 0x00500000, size 0x00aff000
     /dev/txtable      offset 0x00fff000, size 0x00001000
-
-ZipFS
-=====
-
-Zipfs is a read only file system that mounts a zip file as a NuttX file system 
through the NuttX VFS interface.
-This allows users to read files while decompressing them, without requiring 
additional storage space.
-
-CONFIG
-------
-
-.. code-block:: c
-
-    CONFIG_FS_ZIPFS=y
-    CONFIG_LIB_ZLIB=y
-
-Example
--------
-
-1. `./tools/configure.sh sim:zipfs` build sim platform with zipfs support.
-
-2. `make` build NuttX.
-
-3. `./nuttx` run NuttX.
-
-4. `nsh> mount -t hostfs -o /home/<your host name>/work /host` mount host 
directory to /host.
-
-5. `nsh> mount -t zipfs -o /host/test.zip /zip` mount zip file to /zipfs.
-
-6. Use cat/ls command to test.
-
-.. code-block:: c
-
-    nsh> ls /zip
-    /zip:
-     a/1
-     a/2
-    nsh> cat /zip/a/1
-    this is zipfs test 1
-    nsh> cat /zip/a/2
-    this is zipfs test 2
-
diff --git a/Documentation/components/filesystem/procfs.rst 
b/Documentation/components/filesystem/procfs.rst
new file mode 100644
index 0000000000..3379dd3d7c
--- /dev/null
+++ b/Documentation/components/filesystem/procfs.rst
@@ -0,0 +1,51 @@
+=============
+``fs/procfs``
+=============
+
+This is a tiny procfs file system that allows read-only access to a few
+attributes of a task or thread.  This tiny procfs file system can be
+built into the system by enabling::
+
+    CONFIG_FS_PROCFS=y
+
+It can then be mounted from the NSH command line like::
+
+    nsh> mount -t procfs /proc
+
+Example::
+
+  NuttShell (NSH) NuttX-6.31
+  nsh> mount -t procfs /proc
+
+  nsh> ls /proc
+  /proc:
+   0/
+   1/
+
+  nsh> ls /proc/1
+  /proc/1:
+   status
+   cmdline
+
+  nsh> cat /proc/1/status
+  Name:       init
+  Type:       Task
+  State:      Running
+  Priority:   100
+  Scheduler:  SCHED_FIFO
+  SigMask:    00000000
+
+  nsh> cat /proc/1/cmdline
+  init
+
+  nsh> sleep 100 &
+  sleep [2:100]
+  nsh> ls /proc
+  ls /proc
+  /proc:
+   0/
+   1/
+   2/
+
+  nsh> cat /proc/2/cmdline
+  <pthread> 0x527420
diff --git a/Documentation/components/filesystem/smartfs.rst 
b/Documentation/components/filesystem/smartfs.rst
new file mode 100644
index 0000000000..27d7dd0985
--- /dev/null
+++ b/Documentation/components/filesystem/smartfs.rst
@@ -0,0 +1,480 @@
+=======
+SMARTFS
+=======
+
+This document contains information about the implementation of the NuttX
+Sector Mapped Allocation for Really Tiny (SMART) FLASH file system, SMARTFS.
+
+Features
+========
+
+This implementation is a full-feature file system from the perspective of
+file and directory access (i.e. not considering low-level details like the
+lack of bad block management).  The SMART File System was designed specifically
+for small SPI based FLASH parts (1-8 Mbyte for example), though this is not
+a limitation.  It can certainly be used for any size FLASH and can work with
+any MTD device by binding it with the SMART MTD layer and has been tested with
+devices as large as 128MByte (using a 2048 byte sector size with 65534 sectors).
+The FS includes support for:
+
+- Multiple open files from different threads.
+- Open for read/write access with seek capability.
+- Appending to the end of files in either write, append or read/write open modes.
+- Directory support.
+- Support for multiple mount points on a single volume / partition (see details
+  below).
+- Selectable FLASH wear leveling algorithm.
+- Selectable CRC-8 or CRC-16 error detection for sector data.
+- Reduced RAM model for FLASH geometries with a large number of sectors (16K-64K).
+
+General operation
+=================
+
+The SMART File System divides the FLASH device or partition into equal
+sized sectors which are allocated and "released" as needed to perform file
+read/write and directory management operations.  Sectors are then "chained"
+together to build files and directories.  The operations are split into two
+layers:
+
+1.  The MTD block layer (nuttx/drivers/mtd/smart.c).  This layer manages
+    all low-level FLASH access operations including sector allocations,
+    logical to physical sector mapping, erase operations, etc.
+2.  The FS layer (nuttx/fs/smart/smartfs_smart.c).  This layer manages
+    high-level file and directory creation, read/write, deletion, sector
+    chaining, etc.
+
+SMART MTD Block layer
+=====================
+
+The SMART MTD block layer divides the erase blocks of the FLASH device into
+"sectors".  Sectors have both physical and logical number assignments.
+The physical sector number represents the actual offset from the beginning
+of the device, while the logical sector number is assigned as needed.
+A physical sector can have any logical sector assignment, and as files
+are created, modified and destroyed, the logical sector number assignment
+for a given physical sector will change over time.  The logical sector
+number is saved in the physical sector header as the first 2 bytes, and
+the MTD layer maintains an in-memory map of the logical to physical mapping.
+Only physical sectors that are in use will have a logical assignment.
+
+Also contained in the sector header is a flags byte and a sequence number.
+When a sector is allocated, the COMMITTED flag will be "set" (changed from
+erase state to non-erase state) to indicate the sector data is valid.  When
+a sector's data needs to be deleted, the RELEASED flag will be "set" to
+indicate the sector is no longer in use.  This is done because the erase
+block containing the sector cannot necessarily be erased until all sectors
+in that block have been "released".  This allows sectors in the erase
+block to remain active while others are inactive until a "garbage collection"
+operation is needed on the volume to reclaim released sectors.
+
+The sequence number is used when a logical sector's data needs to be
+updated with new information.  When this happens, a new physical sector
+will be allocated which has a duplicate logical sector number but a
+higher sequence number.  This allows maintaining flash consistency in the
+event of a power failure by writing new data prior to releasing the old.
+In the event of a power failure causing duplicate logical sector numbers,
+the sector with the higher sequence number will win, and the older logical
+sector will be released.
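+
+As an illustration of the recovery rule just described, a sketch (not the
+actual NuttX code, which lives in drivers/mtd/smart.c) might compare the
+two sequence numbers modulo 2^16, on the assumption that the counter is
+allowed to wrap around:
+
+.. code-block:: c
+
+    #include <stdint.h>
+
+    /* Illustrative only: given two physical copies of the same logical
+     * sector found during the mount scan, decide which copy survives.
+     * "Newer" is evaluated modulo 2^16 so that a wrapped sequence number
+     * (0 following 0xFFFF) still wins.
+     */
+
+    static inline int seq_newer(uint16_t a, uint16_t b)
+    {
+      return (uint16_t)(a - b) < 0x8000;
+    }
+
+    /* Returns the physical sector to keep; the other would be released */
+
+    uint16_t resolve_duplicate(uint16_t phys1, uint16_t seq1,
+                               uint16_t phys2, uint16_t seq2)
+    {
+      return seq_newer(seq1, seq2) ? phys1 : phys2;
+    }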
+
+The SMART MTD block layer reserves some logical sector numbers for internal
+use, including::
+
+    Sector 0:     The Format Sector.  Has a format signature, format version, etc.
+                  Also contains wear leveling information if enabled.
+    Sector 1-2:   Additional wear-leveling info storage if needed.
+    Sector 3:     The 1st (or only) Root Directory entry
+    Sector 4-10:  Additional root directories when Multi-Mount points are supported.
+    Sector 11-12: Reserved
+
+To perform allocations, the SMART MTD block layer searches each erase block
+on the device to identify the one with the most free sectors.  Free sectors
+are those that have all bytes in the "erased state", meaning they have not
+been previously allocated/released since the last block erase.  Not all
+sectors on the device can be allocated ... the SMART MTD block driver must
+reserve at least one erase-block worth of unused sectors to perform
+garbage collection, which will be performed automatically when no free
+sectors are available.  When wear leveling is enabled, the allocator also takes
+into account the erase block erasure status to maintain level wearing.
+
+Garbage collection is performed by identifying the erase block with the most
+"released" sectors (those that were previously allocated but no longer being
+used) and moving all still-active sectors to a different erase block.  Then
+the now "vacant" erase block is erased, thus changing a group of released
+sectors into free sectors.  This may occur several times depending on the
+number of released sectors on the volume such that better "wear leveling"
+is achieved.
+
+Standard MTD block layer functions are provided for block read, block write,
+etc. so that system utilities such as the "dd" command can be used,
+however, all SMART operations are performed using SMART specific ioctl
+codes to perform sector allocate, sector release, sector write, etc.
+
+Two config items that the SMART MTD layer can take advantage of
+in the underlying MTD drivers are SUBSECTOR_ERASE and BYTE_WRITE.  Most
+flash devices have a 32K to 128K Erase block size, but some of them
+have a smaller erase size available as well.  Vendors have different names
+for the smaller erase size; in the NuttX MTD layer it is called
+SUBSECTOR_ERASE.  For FLASH devices that support the smaller erase size,
+this configuration item can be added to the underlying MTD driver, and
+SMART will use it.  As of this writing, only the
+drivers/mtd/m25px.c driver had support for SUBSECTOR_ERASE.
+
+The BYTE_WRITE config option enables use of the underlying MTD driver's
+ability to write data a byte or a few bytes at a time vs. a full page
+at a time (which is typically 256 bytes).  For FLASH devices that support
+byte write mode, support for this config item can be added to the MTD
+driver.  Enabling and supporting this feature reduces the traffic on the
+SPI bus considerably because SMARTFS performs many operations that affect
+only a few bytes on the device.  Without BYTE_WRITE, the code must
+perform a full page read-modify-write operation on a 256 or even 512
+byte page.
+
+Wear Leveling
+=============
+
+When wear leveling is enabled, the code automatically writes data across
+the entire FLASH device in a manner that causes each erase block to be
+worn (i.e. erased) evenly.  This is accomplished by maintaining a 4-bit
+wear level count for each erase block and forcing less worn blocks to be
+used for writing new data.  The code maintains each block's erase count
+to be within 16 erases of each other, though in testing the span has so
+far never been greater than 10 erases.
+
+As the data in a block is modified repeatedly, the erase count will
+increase.  When the wear level reaches a value of 8 or higher, and the block
+needs to be erased (because the data in it has been modified, etc.) the code
+will select an erase block with the lowest wear count and relocate its
+contents to this block (with the higher wear count).  The idea is that a
+block with the lowest wear count contains more "static" data and should
+require fewer additional erase operations.  This relocation process
+continues on the block, but only when it needs to be erased again.
+
+When the wear level of all erase blocks has increased to a level of
+SMART_WEAR_MIN_LEVEL (currently set to 5), then the wear level counts
+will all be reduced by this value.  This keeps the wear counts normalized
+so they fit in a 4-bit value.  Note that theoretically, it *IS* possible to
+write data to the flash in a manner that causes the wear count of a single
+erase block to increment beyond its maximum value of 15.  This would have
+to be a very specific and unpredictable write sequence though,
+as data is always spread out across the sectors and relocated dynamically.
+In the extremely rare event this does occur, the code will automatically
+cap the maximum wear level at 15 and increment an "uneven wear count"
+variable to indicate the number of times this event has occurred.  So far, I
+have not been able to get the wear count above 10 through my testing.
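+
+The two counter rules described above (saturate at 15, renormalize when
+every block reaches SMART_WEAR_MIN_LEVEL) can be sketched as follows.
+This is an illustration of the described behavior only, with the constants
+taken from the text; it is not the actual NuttX implementation:
+
+.. code-block:: c
+
+    #include <stdint.h>
+    #include <stdbool.h>
+
+    #define SMART_WEAR_MIN_LEVEL  5   /* Value quoted in the text */
+    #define SMART_WEAR_MAX_LEVEL 15   /* Counts are stored in 4 bits */
+
+    /* Bump one block's wear count; returns true if the count was already
+     * capped (the rare "uneven wear" event).
+     */
+
+    bool wear_increment(uint8_t *levels, int block)
+    {
+      if (levels[block] >= SMART_WEAR_MAX_LEVEL)
+        {
+          return true;              /* Would overflow 4 bits: cap it */
+        }
+
+      levels[block]++;
+      return false;
+    }
+
+    /* Shift all counts down once every block has reached the minimum */
+
+    void wear_normalize(uint8_t *levels, int nblocks)
+    {
+      int i;
+
+      for (i = 0; i < nblocks; i++)
+        {
+          if (levels[i] < SMART_WEAR_MIN_LEVEL)
+            {
+              return;               /* Some block is still lightly worn */
+            }
+        }
+
+      for (i = 0; i < nblocks; i++)
+        {
+          levels[i] -= SMART_WEAR_MIN_LEVEL;
+        }
+    }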
+
+The wear level status bits are saved in the format sector (logical sector
+number zero) with overflow saved in the reserved logical sectors one and
+two.  Additionally, the uneven wear count (and total block erases if
+PROCFS is enabled) are stored in the format sector.  When the PROCFS file
+system is enabled and a SMARTFS volume is mounted, the SMART block driver
+details and / or wear level details can be viewed with a command such as::
+
+     cat /proc/fs/smartfs/smart0/status
+        Format version:    1
+        Name Len:          16
+        Total Sectors:     2048
+        Sector Size:       512
+        Format Sector:     1487
+        Dir Sector:        8
+        Free Sectors:      67
+        Released Sectors:  572
+        Unused Sectors:    817
+        Block Erases:      5680
+        Sectors Per Block: 8
+        Sector Utilization:98%
+        Uneven Wear Count: 0
+
+     cat /proc/fs/smartfs/smart0/erasemap
+        DDDCGCCDDCDCCDCBDCCDDGBBDBCDCCDDDCDDDDCCDDCCCGCGDCCDBCDDGBDBDCDD
+        BCCCDDCCDDDCBCCDGCCCBDDCCGBBCBCCGDCCDCBDBCCCDCDDCDDGCDCGDCBCDBDG
+        BCDDCDCBGCCCDDCGBCCGBCCBDDBDDCGDCDDDCGCDDBCDCBDDBCDCGDDCCBCGBCCC
+        GCBCCGCCCDDDBGCCCCGDCCCCCDCDDGBBDACABDBBABCAABCCCDAACBADADDDAECB
+
+Enabling wear leveling can increase the total number of block erases on the
+device in favor of even wearing (erasing).  This is caused by writing /
+moving sectors that otherwise would not need to be written, in order to
+move static data to the more highly worn blocks.  This additional write
+requirement is known
+as write amplification.  To get an idea of the amount of write amplification
+incurred by enabling wear leveling, I conducted the smart_test example using
+four different configurations (wear, no wear, CRC-8, no CRC) and the results
+are shown below.  This was done on a 1M Byte simulated FLASH with 4K erase
+block size and a 512 byte sector size.  The smart_test creates a 700K file and
+then performs 20,000 random seek, write, verify tests.  The seek write forces
+a multitude of sector relocation operations (with or without CRC enabled),
+causing a boatload of block erases.
+
+Enabling wear leveling actually decreased the number of erase operations
+with CRC enabled or disabled.  This is only a single test point based on one
+testing method ... results will likely vary based on the way the data
+is written, the amount of static vs. dynamic data, the amount of free space
+on the volume, and the volume geometry (erase block size, sector size, etc.).
+
+The results of the tests are::
+
+    Case                          Total Block erases
+    ================================================
+    No wear leveling     CRC-8         6632
+    Wear leveling        CRC-8         5585
+
+    No wear leveling     no CRC        6658
+    Wear leveling        no CRC        5398
+
+Reduced RAM model
+=================
+
+On devices with a larger number of logical sectors (i.e. a lot of erase
+blocks with a small selected sector size), the RAM requirement can become
+fairly significant.  This is caused by the in-memory sector map which
+keeps track of the logical to physical mapping of all sectors.  This is
+a RAM array which is 2 * totalsectors in size.  For a device with 64K
+sectors, this means 128K of RAM is required just for the sector map, not
+counting RAM for read/write buffers, erase block management, etc.
+
+So a reduced RAM model has been added which only keeps track of which
+logical sectors have been used (a table which is totalsectors / 8 in size)
+and a configurable sized sector map cache.  Each entry in the sector map
+cache is 6 bytes (logical sector, physical sector and cache entry age).
+ON DEVICES WITH SMALLER TOTAL SECTOR COUNT, ENABLING THIS OPTION COULD
+ACTUALLY INCREASE THE RAM FOOTPRINT INSTEAD OF REDUCE IT.
+
+The sector map cache size should be selected to balance the desired RAM
+usage and the file system performance.  When a logical to physical sector
+mapping is not found in the cache, the code must perform a physical search
+of the FLASH to find the requested logical sector.  This involves reading
+the 5-byte header from each sector on the device until the sector is
+found.  Performing a full read, seek or open for append on a large file
+can cause the sector map cache to flush completely if the file is larger
+than (cache entries * sector size).  For example, in a configuration with
+256 cache entries and a 512 byte sector size, a full read, seek or open for
+append on a 128K file will flush the cache.
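+
+The cache behavior can be sketched like this; the entry layout matches the
+6 bytes described above, but all names and the cache size are invented for
+the example (the real code is in drivers/mtd/smart.c, and a real
+implementation would also track entry validity):
+
+.. code-block:: c
+
+    #include <stdint.h>
+
+    #define CACHE_ENTRIES 8       /* Configurable in the real driver */
+
+    struct cache_entry_s
+    {
+      uint16_t logical;           /* Logical sector number */
+      uint16_t physical;          /* Physical sector it maps to */
+      uint16_t age;               /* For oldest-entry replacement */
+    };
+
+    static struct cache_entry_s g_cache[CACHE_ENTRIES];
+
+    /* Returns the cached physical sector, or 0xFFFF on a miss (the caller
+     * would then fall back to scanning the 5-byte sector headers).
+     */
+
+    uint16_t cache_lookup(uint16_t logical)
+    {
+      uint16_t phys = 0xFFFF;
+      int i;
+
+      for (i = 0; i < CACHE_ENTRIES; i++)
+        {
+          g_cache[i].age++;       /* Every entry ages on each access */
+          if (g_cache[i].logical == logical)
+            {
+              g_cache[i].age = 0; /* Refresh the entry just used */
+              phys = g_cache[i].physical;
+            }
+        }
+
+      return phys;
+    }
+
+    /* Add a mapping, evicting the oldest entry if necessary */
+
+    void cache_insert(uint16_t logical, uint16_t physical)
+    {
+      int victim = 0;
+      int i;
+
+      for (i = 1; i < CACHE_ENTRIES; i++)
+        {
+          if (g_cache[i].age > g_cache[victim].age)
+            {
+              victim = i;
+            }
+        }
+
+      g_cache[victim].logical  = logical;
+      g_cache[victim].physical = physical;
+      g_cache[victim].age      = 0;
+    }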
+
+An additional RAM savings is realized on FLASH parts that contain 16 or
+fewer logical sectors per erase block by packing the free and released
+sector counts into a single byte (plus a little extra for 16 sectors per
+erase block).  A device with a 64K erase block size can benefit from this
+savings by selecting a 4096 or 8192 byte logical sector size, for example.
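+
+That packing can be sketched as two 4-bit nibbles in one byte; this is an
+illustration only (the real storage is in drivers/mtd/smart.c, and the
+exact 16-sectors-per-block case needs the "little extra" mentioned above,
+which this sketch ignores):
+
+.. code-block:: c
+
+    #include <stdint.h>
+
+    /* Pack the free and released sector counts (each <= 15 here) for one
+     * erase block into a single byte.
+     */
+
+    static inline uint8_t pack_counts(uint8_t nfree, uint8_t nreleased)
+    {
+      return (uint8_t)((nfree & 0x0F) | ((nreleased & 0x0F) << 4));
+    }
+
+    static inline uint8_t unpack_free(uint8_t packed)
+    {
+      return packed & 0x0F;
+    }
+
+    static inline uint8_t unpack_released(uint8_t packed)
+    {
+      return packed >> 4;
+    }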
+
+SMART FS Layer
+==============
+
+This layer interfaces with the SMART MTD block layer to allocate / release
+logical sectors, create and destroy sector chains, and perform directory and
+file I/O operations.  Each directory and file on the volume is represented
+as a chain or "linked list" of logical sectors.  Thus the actual physical
+sectors that a given file or directory uses need not be contiguous
+and in fact can (and will) move around over time.  To manage the sector
+chains, the SMARTFS layer adds a "chain header" after the sector's "sector
+header".  This is a 5-byte header which contains the chain type (file or
+directory), a "next logical sector" entry and the count of bytes actually
+used within the sector.
+
+Files are stored in directories, which are sector chains that have a
+specific data format to track file names and "first" logical sector
+numbers.  Each file in the directory has a fixed-size "directory entry"
+that has bits to indicate if it is still active or has been deleted, file
+permission bits, first sector number, date (utc stamp), and filename.  The
+filename length is set from the CONFIG_SMARTFS_NAMLEN config value at the
+time the mksmartfs command is executed.  Changes to the
+CONFIG_SMARTFS_NAMLEN parameter will not be reflected on the volume
+unless it is reformatted.  The same is true of the sector size parameter.
+
+Subdirectories are supported by creating a new sector chain (of type
+directory) and creating a standard directory entry for it in its parent
+directory.  Then files and additional sub-directories can be added to
+that directory chain.  As such, each directory on the volume will occupy
+a minimum of one sector on the device.  Subdirectories can be deleted
+only if they are "empty" (i.e. they reference no active entries).  There
+is no provision made for performing a recursive directory delete.
+
+New files and subdirectories can be added to a directory without needing
+to copy and release the original directory sector.  This is done by
+writing only the new entry data to the sector and ignoring the "bytes
+used" field of the chain header for directories.  Updates (modifying
+existing data) or appending to a sector for regular files requires copying
+the file data to a new sector and releasing the old one.
+
+SMARTFS organization
+====================
+
+The following example assumes 2 logical sectors per FLASH erase block.  The
+actual relationship is determined by the FLASH geometry reported by the MTD
+driver::
+
+  ERASE LOGICAL                   Sectors begin with a sector header.  Sectors may
+  BLOCK SECTOR      CONTENTS      be marked as "released," pending garbage collection
+    n   2*n     --+---------------+
+       Sector Hdr |LLLLLLLLLLLLLLL| Logical sector number (2 bytes)
+                  |QQQQQQQQQQQQQQQ| Sequence number (2 bytes)
+                  |SSSSSSSSSSSSSSS| Status bits (1 byte)
+                  +---------------+
+           FS Hdr |TTTTTTTTTTTTTTT| Sector Type (dir or file) (1 byte)
+                  |NNNNNNNNNNNNNNN| Number of next logical sector in chain
+                  |UUUUUUUUUUUUUUU| Number of bytes used in this sector
+                  |               |
+                  |               |
+                  | (Sector Data) |
+                  |               |
+                  |               |
+        2*n+1   --+---------------+
+       Sector Hdr |LLLLLLLLLLLLLLL| Logical sector number (2 bytes)
+                  |QQQQQQQQQQQQQQQ| Sequence number (2 bytes)
+                  |SSSSSSSSSSSSSSS| Status bits (1 byte)
+                  +---------------+
+           FS Hdr |TTTTTTTTTTTTTTT| Sector Type (dir or file) (1 byte)
+                  |NNNNNNNNNNNNNNN| Number of next logical sector in chain
+                  |UUUUUUUUUUUUUUU| Number of bytes used in this sector
+                  |               |
+                  |               |
+                  | (Sector Data) |
+                  |               |
+                  |               |
+   n+1  2*(n+1) --+---------------+
+       Sector Hdr |LLLLLLLLLLLLLLL| Logical sector number (2 bytes)
+                  |QQQQQQQQQQQQQQQ| Sequence number (2 bytes)
+                  |SSSSSSSSSSSSSSS| Status bits (1 byte)
+                  +---------------+
+           FS Hdr |TTTTTTTTTTTTTTT| Sector Type (dir or file) (1 byte)
+                  |NNNNNNNNNNNNNNN| Number of next logical sector in chain
+                  |UUUUUUUUUUUUUUU| Number of bytes used in this sector
+                  |               |
+                  |               |
+                  | (Sector Data) |
+                  |               |
+                  |               |
+                --+---------------+
+
+Headers
+=======
+``SECTOR HEADER``
+    Each sector contains a header (currently 5 bytes) for identifying the
+    status of the sector.  The header contains the sector's logical sector
+    number mapping, an incrementing sequence number to manage changes to
+    logical sector data, and sector flags (committed, released, version, etc.).
+    At the block level, there is no notion of sector chaining, only
+    allocated sectors within erase blocks.
+
+``FORMAT HEADER``
+    Contains information regarding the format on the volume, including
+    a format signature, formatted block size, name length within the directory
+    chains, etc.
+
+``CHAIN HEADER``
+    The file system header (next 5 bytes) tracks file and directory sector
+    chains and actual sector usage (number of bytes that are valid in the
+    sector).  Also indicates the type of chain (file or directory).
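+
+Pictured as packed C structures, the two 5-byte headers might look like
+this.  The field names are invented for illustration; the real definitions
+live in drivers/mtd/smart.c and fs/smartfs/smartfs.h, and the exact layout
+varies with the CRC configuration:
+
+.. code-block:: c
+
+    #include <stdint.h>
+
+    #pragma pack(push, 1)
+
+    struct sector_header_s    /* Written by the SMART MTD block layer */
+    {
+      uint16_t logicalsector; /* Logical sector held by this physical sector */
+      uint16_t seq;           /* Bumped on each rewrite of the sector */
+      uint8_t  status;        /* COMMITTED / RELEASED / version flags */
+    };
+
+    struct chain_header_s     /* Written by the SMARTFS layer, follows above */
+    {
+      uint8_t  type;          /* Chain type: file or directory */
+      uint16_t nextsector;    /* Next logical sector in the chain */
+      uint16_t used;          /* Bytes actually used within this sector */
+    };
+
+    #pragma pack(pop)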
+
+Multiple Mount Points
+=====================
+
+Typically, a volume contains a single root directory entry (logical sector
+number 1) and all files and subdirectories are "children" of that root
+directory.  This is a traditional scheme and allows the volume to
+be mounted in a single location within the VFS.  As a configuration
+option, when the volume is formatted via the mksmartfs command, multiple
+root directory entries can be created instead.  The number of entries to
+be created is an added parameter to the mksmartfs command in this
+configuration.
+
+When this option has been enabled in the configuration and specified
+during the format, then the volume will have multiple root directories
+and can support a mount point in the VFS for each.  In this mode,
+the device entries reported in the /dev directory will have a directory
+number postfixed to the name, such as::
+
+    /dev/smart0d1
+    /dev/smart0d2
+    /dev/smart1p1d1
+    /dev/smart1p2d2
+    etc.
+
+Each device entry can then be mounted at different locations, such as::
+
+    /dev/smart0d1 --> /usr
+    /dev/smart0d2 --> /home
+    etc.
+
+Using multiple mount points is slightly different from using partitions
+on the volume in that each mount point has the potential to use the
+entire space on the volume vs. having a pre-allocated reservation of
+space defined by the partition sizes.  Also, all files and directories
+of all mount-points will be physically "mixed in" with data from the
+other mount-points (though files from one will never logically "appear"
+in the others).  Each directory structure is isolated from the others,
+they simply share the same physical media for storage.
+
+SMARTFS Limitations
+===================
+
+This implementation has several limitations that you should be aware of
+before opting to use SMARTFS:
+
+1. There is currently no FLASH bad-block management code.  The reason for
+   this is that the FS was geared for Serial NOR FLASH parts.  To use
+   SMARTFS with a NAND FLASH, bad block management would need to be added,
+   along with a few minor changes to eliminate single bit writes to release
+   a sector, etc.
+
+2. The implementation can support CRC-8 or CRC-16 error detection, and can
+   relocate a failed write operation to a new sector.  However, with no bad
+   block management implementation, the code will continue its attempts at
+   using the failing block / sector, reducing efficiency and possibly
+   saving data in a block with questionable integrity.
+
+3. The released-sector garbage collection process occurs only during a write
+   when there are no free FLASH sectors.  Thus, occasionally, file writing
+   may take a long time.  This typically isn't noticeable unless the volume
+   is very full and multiple copy / erase cycles must be performed to
+   complete the garbage collection.
+
+4. The total number of logical sectors on the device must be 65534 or less.
+   The number of logical sectors is based on the total device / partition
+   size and the selected sector size.  For larger flash parts, a larger
+   sector size would need to be used to meet this requirement. Creating a
+   geometry which results in 65536 sectors (a 32MByte FLASH with 512 byte
+   logical sector, for example) will cause the code to automatically reduce
+   the total sector count to 65534, thus "wasting" the last two logical
+   sectors on the device (they will never be used).
+
+   This restriction exists because:
+
+   a. The logical sector number is a 16-bit field (i.e. 65535 is the max).
+   b. Logical sector number 65535 (0xFFFF) is reserved as this is typically
+      the "erased state" of the FLASH.
+
+ioctls
+======
+
+``BIOC_LLFORMAT``
+    Performs a SMART low-level format on the volume.  This erases the volume
+    and writes the FORMAT HEADER to the first physical sector on the volume.
+
+``BIOC_GETFORMAT``
+    Returns information about the format found on the volume during the
+    "scan" operation which is performed when the volume is mounted.
+
+``BIOC_ALLOCSECT``
+    Allocates a logical sector on the device.
+
+``BIOC_FREESECT``
+    Frees a logical sector that had been previously allocated.  This
+    causes the sector to be marked as "released" and possibly causes the
+    erase block to be erased if it is the last active sector in its
+    erase block.
+
+``BIOC_READSECT``
+    Reads data from a logical sector.  This uses a structure to identify
+    the offset and count of data to be read.
+
+``BIOC_WRITESECT``
+    Writes data to a logical sector.  This uses a structure to identify
+    the offset and count of data to be written.  May cause a logical
+    sector to be physically relocated and may cause garbage collection
+    if needed when moving data to a new physical sector.
+
+Things to Do
+============
+
+- Add file permission checking to open / read / write routines.
+- Add reporting of actual FLASH usage for directories (each directory
+  occupies one or more physical sectors, yet the size is reported as
+  zero for directories).
diff --git a/Documentation/components/filesystem/spiffs.rst 
b/Documentation/components/filesystem/spiffs.rst
new file mode 100644
index 0000000000..7cb280eca8
--- /dev/null
+++ b/Documentation/components/filesystem/spiffs.rst
@@ -0,0 +1,31 @@
+======
+SPIFFS
+======
+
+Creating an image
+=================
+
+This implementation is supposed to be compatible with
+images generated by the following tools:
+
+* `mkspiffs <https://github.com/igrr/mkspiffs>`_
+* ESP-IDF `spiffsgen.py <https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/storage/spiffs.html#spiffsgen-py>`_
+
+Note: please ensure the following NuttX configuration settings are used
+to remain compatible with these tools:
+
+* ``CONFIG_SPIFFS_COMPAT_OLD_NUTTX`` is disabled
+* ``CONFIG_SPIFFS_LEADING_SLASH=y``
+
+mkspiffs
+--------
+
+* Specify ``CONFIG_SPIFFS_NAME_MAX + 1`` for ``SPIFFS_OBJ_NAME_LEN``.
+* Specify 0 for ``SPIFFS_OBJ_META_LEN``.
+
+ESP-IDF ``spiffsgen.py``
+------------------------
+
+* Specify ``CONFIG_SPIFFS_NAME_MAX + 1`` for the ``--obj-name-len`` option.
+* Specify 0 for the ``--meta-len`` option.
+
diff --git a/Documentation/components/filesystem/unionfs.rst 
b/Documentation/components/filesystem/unionfs.rst
new file mode 100644
index 0000000000..076e407b09
--- /dev/null
+++ b/Documentation/components/filesystem/unionfs.rst
@@ -0,0 +1,96 @@
+==============
+``fs/unionfs``
+==============
+
+Overview
+========
+
+This directory contains the NuttX Union File System.  The Union file
+system provides a mechanism to overlay two different, mounted file
+systems so that they appear as one.  In general this works like this:
+
+  1) Mount file system 1 at some location, say /mnt/file1
+  2) Mount file system 2 at some location, say /mnt/file2
+  3) Call mount() to combine and overlay /mnt/file1 and /mnt/file2
+     as a new mount point, say /mnt/unionfs.
+
+/mnt/file1 and /mnt/file2 will disappear and be replaced by the single
+mountpoint /mnt/unionfs.  The previous contents under /mnt/file1 and
+/mnt/file2 will appear merged under /mnt/unionfs.  Files at the same
+relative path in file system 1 will take precedence.  If another file of the
+same name and same relative location exists in file system 2, it will
+not be visible because it will be occluded by the file in file system 1.
+
+See include/nuttx/unionfs.h for additional information.
+
+The Union File System is enabled by selecting the CONFIG_FS_UNIONFS option
+in the NuttX configuration file.
+
+Disclaimer:  This Union File System was certainly inspired by UnionFS
+(http://en.wikipedia.org/wiki/UnionFS) and the similarity in naming is
+unavoidable.  However, other than that, the NuttX Union File System
+has no relationship with the UnionFS project in specification, usage,
+design, or implementation.
+
+Uses of the Union File System
+==============================
+
+The original motivation for this file system was the use of the built-in
+function file system (BINFS) with a web server.  In that case, the built-in
+functions provide CGI programs.  But the BINFS file system cannot hold
+content.  Fixed content would need to be retained in a more standard file
+system such as ROMFS.  With this Union File System, you can overlay the
+BINFS mountpoint on the ROMFS mountpoint, providing a single directory
+that appears to contain the executables from the BINFS file system along
+with the web content from the ROMFS file system.
+
+Another possible use for the Union File System could be to augment or
+replace files in a FLASH file system.  For example, suppose that you have
+a product that ships with content in a ROMFS file system provided by the
+on-board FLASH.  Later, you overlay that ROMFS file system with additional
+files from an SD card by using the Union File System to overlay, and
+perhaps replace, the ROMFS files.
+
+Another use case might be to overlay a read-only file system like ROMFS
+with a writable file system (like a RAM disk).  This would then yield
+a read/write file system with some fixed content.
+
+Prefixes
+========
+
+An optional prefix may be provided with each of the file systems
+combined by the Union File System.  For example, suppose that:
+
+* File system 1 is a ROMFS file system with prefix == NULL,
+* File system 2 is a BINFS file system with prefix == "cgi-bin", and
+* The union file system is mounted at /mnt/www.
+
+Then the content in the ROMFS file system would appear at
+/mnt/www and the content of the BINFS file system would appear at
+/mnt/www/cgi-bin.
+
+Example Configurations
+======================
+
+* ``boards/sim/sim/sim/unionfs`` - This is a simulator configuration that
+  uses the Union File System test at apps/examples/unionfs.  That test
+  overlays two small ROMFS file systems with many conflicts in
+  directories and file names.  This is a good platform for testing the
+  Union file System and apps/examples/unionfs is a good example of how to
+  configure the Union File System.
+
+* ``boards/arm/lpc17xx_40xx/lincoln60/thttpd-binfs`` - This is an example
+  using the THTTPD web server.  It serves up content from a Union File
+  System with fixed content provided by a ROMFS file system and CGI
+  content provided by a BINFS file system.
+
+  You can see how the Union File System content directory is configured
+  by logic in apps/examples/thttpd/.
+
+* ``boards/arm/lpc17xx_40xx/olimex-lpc1766stk/thttpd-binfs`` - This is
+  essentially the same as the lincoln60 configuration.  It does not work,
+  however, because the LPC1766 has insufficient RAM to support the THTTPD
+  application in this configuration.
+
+See the README.txt file in each of these board directories for additional
+information about these configurations.
diff --git a/Documentation/components/filesystem/zipfs.rst 
b/Documentation/components/filesystem/zipfs.rst
new file mode 100644
index 0000000000..0b511819cb
--- /dev/null
+++ b/Documentation/components/filesystem/zipfs.rst
@@ -0,0 +1,41 @@
+=====
+ZipFS
+=====
+
+Zipfs is a read-only file system that mounts a zip file as a NuttX file
+system through the NuttX VFS interface.  This allows users to read files
+while decompressing them, without requiring additional storage space.
+
+CONFIG
+======
+
+.. code-block:: bash
+
+    CONFIG_FS_ZIPFS=y
+    CONFIG_LIB_ZLIB=y
+
+Example
+=======
+
+1. ``./tools/configure.sh sim:zipfs`` build sim platform with zipfs support.
+
+2. ``make`` build NuttX.
+
+3. ``./nuttx`` run NuttX.
+
+4. ``nsh> mount -t hostfs -o /home/<your host name>/work /host`` mount the
+   host directory at ``/host``.
+
+5. ``nsh> mount -t zipfs -o /host/test.zip /zip`` mount the zip file at ``/zip``.
+
+6. Use cat/ls command to test.
+
+.. code-block:: bash
+
+    nsh> ls /zip
+    /zip:
+     a/1
+     a/2
+    nsh> cat /zip/a/1
+    this is zipfs test 1
+    nsh> cat /zip/a/2
+    this is zipfs test 2
+
diff --git a/Documentation/components/index.rst 
b/Documentation/components/index.rst
index 3b26862bc2..205bf364a2 100644
--- a/Documentation/components/index.rst
+++ b/Documentation/components/index.rst
@@ -10,9 +10,9 @@ NuttX is very feature-rich RTOS and is thus composed of various different subsystems
    power.rst
    binfmt.rst
    drivers/index.rst
-   filesystem.rst
    nxflat.rst
    nxgraphics/index.rst
    nxwidgets.rst
    paging.rst
    audio/index.rst
+   filesystem/index.rst
