On 2019-04-23 18:53, Tom Lane wrote:
> Peter Eisentraut <peter.eisentr...@2ndquadrant.com> writes:
>> On 2019-04-23 16:15, Joe Conway wrote:
>>> I don't think so. Not sure if you have an account at Red Hat, but this
>>> ticket covers it:
>>> https://access.redhat.com/solutions/48199
> 
>> That discusses the equally-named export options on the NFS server, not
>> the mount options on the NFS client.
> 
> Well, the DBA might also be the NFS server's admin, so I think we ought
> to explain the correct settings on both ends.

Right, the slight confusion in this thread indicates that this is worth
explaining further.

New version attached.

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 5891e07a42017ee2b80e25c05b7b89bf7e5fe605 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <pe...@eisentraut.org>
Date: Wed, 24 Apr 2019 10:53:40 +0200
Subject: [PATCH v2] doc: Update section on NFS

---
 doc/src/sgml/runtime.sgml | 94 ++++++++++++++++++++++++++-------------
 1 file changed, 63 insertions(+), 31 deletions(-)

diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 388dc7e966..4ec80ccc0b 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -229,42 +229,74 @@ <title>Use of Secondary File Systems</title>
 
   </sect2>
 
-  <sect2 id="creating-cluster-nfs">
-   <title>Use of Network File Systems</title>
-
-   <indexterm zone="creating-cluster-nfs">
-    <primary>Network File Systems</primary>
-   </indexterm>
-   <indexterm><primary><acronym>NFS</acronym></primary><see>Network File Systems</see></indexterm>
-   <indexterm><primary>Network Attached Storage (<acronym>NAS</acronym>)</primary><see>Network File Systems</see></indexterm>
+  <sect2 id="creating-cluster-filesystem">
+   <title>File Systems</title>
 
    <para>
-    Many installations create their database clusters on network file
-    systems.  Sometimes this is done via <acronym>NFS</acronym>, or by using a
-    Network Attached Storage (<acronym>NAS</acronym>) device that uses
-    <acronym>NFS</acronym> internally.  <productname>PostgreSQL</productname> does nothing
-    special for <acronym>NFS</acronym> file systems, meaning it assumes
-    <acronym>NFS</acronym> behaves exactly like locally-connected drives.
-    If the client or server <acronym>NFS</acronym> implementation does not
-    provide standard file system semantics, this can
-    cause reliability problems (see <ulink
-    url="https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html"></ulink>).
-    Specifically, delayed (asynchronous) writes to the <acronym>NFS</acronym>
-    server can cause data corruption problems.  If possible, mount the
-    <acronym>NFS</acronym> file system synchronously (without caching) to avoid
-    this hazard.  Also, soft-mounting the <acronym>NFS</acronym> file system is
-    not recommended.
+    Generally, any file system with POSIX semantics can be used for
+    PostgreSQL.  Users prefer different file systems for a variety of reasons,
+    including vendor support, performance, and familiarity.  Experience
+    suggests that, all other things being equal, one should not expect major
+    performance or behavior changes merely from switching file systems or
+    making minor file system configuration changes.
    </para>
 
-   <para>
-    Storage Area Networks (<acronym>SAN</acronym>) typically use communication
-    protocols other than <acronym>NFS</acronym>, and may or may not be subject
-    to hazards of this sort.  It's advisable to consult the vendor's
-    documentation concerning data consistency guarantees.
-    <productname>PostgreSQL</productname> cannot be more reliable than
-    the file system it's using.
-   </para>
+   <sect3 id="creating-cluster-nfs">
+    <title>NFS</title>
+
+    <indexterm zone="creating-cluster-nfs">
+     <primary>NFS</primary>
+    </indexterm>
+
+    <para>
+     It is possible to use an <acronym>NFS</acronym> file system for storing
+     the <productname>PostgreSQL</productname> data directory.
+     <productname>PostgreSQL</productname> does nothing special for
+     <acronym>NFS</acronym> file systems, meaning it assumes
+     <acronym>NFS</acronym> behaves exactly like locally-connected drives.
+     <productname>PostgreSQL</productname> does not use any functionality that
+     is known to have nonstandard behavior on <acronym>NFS</acronym>, such as
+     file locking.
+    </para>
 
+    <para>
+     The only firm requirement for using <acronym>NFS</acronym> with
+     <productname>PostgreSQL</productname> is that the file system is mounted
+     using the <literal>hard</literal> option.  With the
+     <literal>hard</literal> option, processes can <quote>hang</quote>
+     indefinitely if there are network problems, so this configuration will
+     require a careful monitoring setup.  The <literal>soft</literal> option
+     will interrupt system calls in case of network problems, but
+     <productname>PostgreSQL</productname> will not repeat system calls
+     interrupted in this way, so any such interruption will result in an I/O
+     error being reported.
+    </para>
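To make the hard/soft distinction concrete, here is a hypothetical mount invocation (hostname and paths are placeholders, not anything from the patch):

```shell
# Mount an NFS export for use as a PostgreSQL data directory.
# "hard" makes NFS I/O retry indefinitely on network problems instead of
# returning an error; a "soft" mount would instead interrupt the system
# call, which PostgreSQL reports as an I/O error.
mount -t nfs -o hard nfs-server:/export/pgdata /var/lib/pgsql/data
```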
+
+    <para>
+     It is not necessary to use the <literal>sync</literal> mount option.  The
+     behavior of the <literal>async</literal> option is sufficient, since
+     <productname>PostgreSQL</productname> issues <literal>fsync</literal>
+     calls at appropriate times to flush the write caches.  (This is analogous
+     to how it works on a local file system.)  However, it is strongly
+     recommended to use the <literal>sync</literal> export option on the NFS
+     <emphasis>server</emphasis>.  Otherwise an <literal>fsync</literal> or
+     equivalent on the NFS client is not actually guaranteed to reach
+     permanent storage on the server, which could cause corruption similar to
+     running with the parameter <xref linkend="guc-fsync"/> off.  The defaults
+     of these mount and export options differ between vendors and versions,
+     so it is recommended to check and perhaps specify them explicitly to
+     avoid any ambiguity.
+    </para>
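As an illustration of specifying both ends explicitly (server and client names here are hypothetical):

```shell
# Hypothetical /etc/exports entry on the NFS server.  "sync" ensures a
# client-side fsync() does not return before the data reaches stable
# storage on the server:
#
#   /export/pgdata  client.example.com(rw,sync)

# On the client, an "async" mount is sufficient because PostgreSQL issues
# its own fsync calls; spelling the options out avoids relying on
# vendor-specific defaults:
mount -t nfs -o rw,hard,async nfs-server:/export/pgdata /var/lib/pgsql/data
```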
+
+    <para>
+     In some cases, an external storage product can be accessed either via NFS
+     or a lower-level protocol such as iSCSI.  In the latter case, the storage
+     appears as a block device and any available file system can be created on
+     it.  That approach might relieve the DBA from having to deal with some of
+     the idiosyncrasies of NFS, but of course the complexity of managing
+     remote storage then happens at other levels.
+    </para>
+   </sect3>
   </sect2>
 
  </sect1>
-- 
2.21.0
