On Mon, Aug  1, 2016 at 01:35:53PM +0200, Christoph Berg wrote:
> Re: Bruce Momjian 2016-07-30 <20160730181643.gd22...@momjian.us>
> > I also just applied a doc patch that increases case and spacing
> > consistency in the use of kB/MB/GB/TB.
> 
> Hi,
> 
> PostgreSQL uses spaces inconsistently, though. pg_size_pretty uses spaces:
> 
> # select pg_size_pretty((2^20)::bigint);
>  pg_size_pretty
> ────────────────
>  1024 kB
> 
> SHOW does not:
> 
> # show work_mem;
>  work_mem
> ──────────
>  1MB

Yes, that is inconsistent.  I have updated my attached patch to remove
spaces between the number and the units --- see below.

> The SHOW output is formatted by _ShowOption() using 'INT64_FORMAT "%s"',
> via convert_from_base_unit(). The latter has a comment attached...
> /*
>  * Convert a value in some base unit to a human-friendly unit.  The output
>  * unit is chosen so that it's the greatest unit that can represent the value
>  * without loss.  For example, if the base unit is GUC_UNIT_KB, 1024 is
>  * converted to 1 MB, but 1025 is represented as 1025 kB.
>  */
> ... where the spaces are present again.
> 
> General typesetting standard seems to be "1 MB", i.e. to include a
> space between value and unit. (This would also be my preference.)
> 
> Opinions? (I'd opt to insert spaces in the docs now, and then see if
> inserting a space in the SHOW output is acceptable for 10.0.)

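As an aside, the rule that comment describes is easy to see with
SET/SHOW.  This is from a stock build, so with the attached patch the
units below would read "KB":

# SET work_mem = '1024';
# SHOW work_mem;
 work_mem
──────────
 1MB

# SET work_mem = '1025';
# SHOW work_mem;
 work_mem
──────────
 1025kB
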
I went through the docs a few days ago and committed a change to remove
spaces between the number and the units in the few cases that had them ---
the majority didn't have spaces.

The Wikipedia article I posted earlier also doesn't use spaces:

        https://en.wikipedia.org/wiki/Binary_prefix

I think the only argument _for_ spaces is that the output of
pg_size_pretty() now looks odd, e.g.:

               10 | 10 bytes       | -10 bytes
             1000 | 1000 bytes     | -1000 bytes
          1000000 | 977KB          | -977KB
       1000000000 | 954MB          | -954MB
    1000000000000 | 931GB          | -931GB
 1000000000000000 | 909TB          | -909TB
                    ^^^^^             ^^^^^
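
That table is the output of a query along these lines, run with the
patch applied:

# SELECT v, pg_size_pretty(v), pg_size_pretty(-v)
  FROM (VALUES (10::bigint), (1000), (1000000), (1000000000),
               (1000000000000), (1000000000000000)) AS t(v);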

The issue is that we output "10 bytes", not "10bytes", but for
abbreviated units we use "977KB".  That seems inconsistent, but it
matches the usual convention.  I think this is because "977KB" is really
"977K bytes"; we just append the "B" after the "K" for brevity.
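
Note that pg_size_bytes() parses its unit case-insensitively, with or
without a space, so either spelling round-trips (977 * 1024 = 1000448):

# SELECT pg_size_bytes('977KB'), pg_size_bytes('977 kB');
 pg_size_bytes | pg_size_bytes
───────────────┼───────────────
       1000448 |       1000448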

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +
diff --git a/configure b/configure
new file mode 100755
index b49cc11..8466e5a
*** a/configure
--- b/configure
*************** Optional Packages:
*** 1502,1511 ****
    --with-libs=DIRS        alternative spelling of --with-libraries
    --with-pgport=PORTNUM   set default port number [5432]
    --with-blocksize=BLOCKSIZE
!                           set table block size in kB [8]
    --with-segsize=SEGSIZE  set table segment size in GB [1]
    --with-wal-blocksize=BLOCKSIZE
!                           set WAL block size in kB [8]
    --with-wal-segsize=SEGSIZE
                            set WAL segment size in MB [16]
    --with-CC=CMD           set compiler (deprecated)
--- 1502,1511 ----
    --with-libs=DIRS        alternative spelling of --with-libraries
    --with-pgport=PORTNUM   set default port number [5432]
    --with-blocksize=BLOCKSIZE
!                           set table block size in KB [8]
    --with-segsize=SEGSIZE  set table segment size in GB [1]
    --with-wal-blocksize=BLOCKSIZE
!                           set WAL block size in KB [8]
    --with-wal-segsize=SEGSIZE
                            set WAL segment size in MB [16]
    --with-CC=CMD           set compiler (deprecated)
*************** case ${blocksize} in
*** 3550,3557 ****
   32) BLCKSZ=32768;;
    *) as_fn_error $? "Invalid block size. Allowed values are 1,2,4,8,16,32." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${blocksize}kB" >&5
! $as_echo "${blocksize}kB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
--- 3550,3557 ----
   32) BLCKSZ=32768;;
    *) as_fn_error $? "Invalid block size. Allowed values are 1,2,4,8,16,32." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${blocksize}KB" >&5
! $as_echo "${blocksize}KB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
*************** case ${wal_blocksize} in
*** 3638,3645 ****
   64) XLOG_BLCKSZ=65536;;
    *) as_fn_error $? "Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${wal_blocksize}kB" >&5
! $as_echo "${wal_blocksize}kB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
--- 3638,3645 ----
   64) XLOG_BLCKSZ=65536;;
    *) as_fn_error $? "Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${wal_blocksize}KB" >&5
! $as_echo "${wal_blocksize}KB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
diff --git a/configure.in b/configure.in
new file mode 100644
index 5da4f74..2ed6298
*** a/configure.in
--- b/configure.in
*************** AC_SUBST(enable_tap_tests)
*** 250,256 ****
  # Block size
  #
  AC_MSG_CHECKING([for block size])
! PGAC_ARG_REQ(with, blocksize, [BLOCKSIZE], [set table block size in kB [8]],
               [blocksize=$withval],
               [blocksize=8])
  case ${blocksize} in
--- 250,256 ----
  # Block size
  #
  AC_MSG_CHECKING([for block size])
! PGAC_ARG_REQ(with, blocksize, [BLOCKSIZE], [set table block size in KB [8]],
               [blocksize=$withval],
               [blocksize=8])
  case ${blocksize} in
*************** case ${blocksize} in
*** 262,268 ****
   32) BLCKSZ=32768;;
    *) AC_MSG_ERROR([Invalid block size. Allowed values are 1,2,4,8,16,32.])
  esac
! AC_MSG_RESULT([${blocksize}kB])
  
  AC_DEFINE_UNQUOTED([BLCKSZ], ${BLCKSZ}, [
   Size of a disk block --- this also limits the size of a tuple.  You
--- 262,268 ----
   32) BLCKSZ=32768;;
    *) AC_MSG_ERROR([Invalid block size. Allowed values are 1,2,4,8,16,32.])
  esac
! AC_MSG_RESULT([${blocksize}KB])
  
  AC_DEFINE_UNQUOTED([BLCKSZ], ${BLCKSZ}, [
   Size of a disk block --- this also limits the size of a tuple.  You
*************** AC_DEFINE_UNQUOTED([RELSEG_SIZE], ${RELS
*** 314,320 ****
  # WAL block size
  #
  AC_MSG_CHECKING([for WAL block size])
! PGAC_ARG_REQ(with, wal-blocksize, [BLOCKSIZE], [set WAL block size in kB [8]],
               [wal_blocksize=$withval],
               [wal_blocksize=8])
  case ${wal_blocksize} in
--- 314,320 ----
  # WAL block size
  #
  AC_MSG_CHECKING([for WAL block size])
! PGAC_ARG_REQ(with, wal-blocksize, [BLOCKSIZE], [set WAL block size in KB [8]],
               [wal_blocksize=$withval],
               [wal_blocksize=8])
  case ${wal_blocksize} in
*************** case ${wal_blocksize} in
*** 327,333 ****
   64) XLOG_BLCKSZ=65536;;
    *) AC_MSG_ERROR([Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64.])
  esac
! AC_MSG_RESULT([${wal_blocksize}kB])
  
  AC_DEFINE_UNQUOTED([XLOG_BLCKSZ], ${XLOG_BLCKSZ}, [
   Size of a WAL file block.  This need have no particular relation to BLCKSZ.
--- 327,333 ----
   64) XLOG_BLCKSZ=65536;;
    *) AC_MSG_ERROR([Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64.])
  esac
! AC_MSG_RESULT([${wal_blocksize}KB])
  
  AC_DEFINE_UNQUOTED([XLOG_BLCKSZ], ${XLOG_BLCKSZ}, [
   Size of a WAL file block.  This need have no particular relation to BLCKSZ.
diff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml
new file mode 100644
index 38e6f50..34d87b3
*** a/doc/src/sgml/auto-explain.sgml
--- b/doc/src/sgml/auto-explain.sgml
*************** LOG:  duration: 3.651 ms  plan:
*** 263,269 ****
            Hash Cond: (pg_class.oid = pg_index.indrelid)
            ->  Seq Scan on pg_class  (cost=0.00..9.55 rows=255 width=4) (actual time=0.016..0.140 rows=255 loops=1)
            ->  Hash  (cost=3.02..3.02 rows=92 width=4) (actual time=3.238..3.238 rows=92 loops=1)
!                 Buckets: 1024  Batches: 1  Memory Usage: 4kB
                  ->  Seq Scan on pg_index  (cost=0.00..3.02 rows=92 width=4) (actual time=0.008..3.187 rows=92 loops=1)
                        Filter: indisunique
  ]]></screen>
--- 263,269 ----
            Hash Cond: (pg_class.oid = pg_index.indrelid)
            ->  Seq Scan on pg_class  (cost=0.00..9.55 rows=255 width=4) (actual time=0.016..0.140 rows=255 loops=1)
            ->  Hash  (cost=3.02..3.02 rows=92 width=4) (actual time=3.238..3.238 rows=92 loops=1)
!                 Buckets: 1024  Batches: 1  Memory Usage: 4KB
                  ->  Seq Scan on pg_index  (cost=0.00..3.02 rows=92 width=4) (actual time=0.008..3.187 rows=92 loops=1)
                        Filter: indisunique
  ]]></screen>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
new file mode 100644
index cbb333f..a0678b2
*** a/doc/src/sgml/catalogs.sgml
--- b/doc/src/sgml/catalogs.sgml
***************
*** 4021,4027 ****
     segments or <quote>pages</> small enough to be conveniently stored as rows
     in <structname>pg_largeobject</structname>.
     The amount of data per page is defined to be <symbol>LOBLKSIZE</> (which is currently
!    <literal>BLCKSZ/4</>, or typically 2kB).
    </para>
  
    <para>
--- 4021,4027 ----
     segments or <quote>pages</> small enough to be conveniently stored as rows
     in <structname>pg_largeobject</structname>.
     The amount of data per page is defined to be <symbol>LOBLKSIZE</> (which is currently
!    <literal>BLCKSZ/4</>, or typically 2KB).
    </para>
  
    <para>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
new file mode 100644
index b9581d9..23d666a
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
***************
*** 81,87 ****
         <itemizedlist>
          <listitem>
           <para>
!           Valid memory units are <literal>kB</literal> (kilobytes),
            <literal>MB</literal> (megabytes), <literal>GB</literal>
            (gigabytes), and <literal>TB</literal> (terabytes).
            The multiplier for memory units is 1024, not 1000.
--- 81,87 ----
         <itemizedlist>
          <listitem>
           <para>
!           Valid memory units are <literal>KB</literal> (kilobytes),
            <literal>MB</literal> (megabytes), <literal>GB</literal>
            (gigabytes), and <literal>TB</literal> (terabytes).
            The multiplier for memory units is 1024, not 1000.
*************** include_dir 'conf.d'
*** 1903,1909 ****
           cache, where performance might degrade.  This setting may have no
           effect on some platforms.  The valid range is between
           <literal>0</literal>, which disables controlled writeback, and
!          <literal>2MB</literal>.  The default is <literal>512kB</> on Linux,
           <literal>0</> elsewhere.  (Non-default values of
           <symbol>BLCKSZ</symbol> change the default and maximum.)
           This parameter can only be set in the <filename>postgresql.conf</>
--- 1903,1909 ----
           cache, where performance might degrade.  This setting may have no
           effect on some platforms.  The valid range is between
           <literal>0</literal>, which disables controlled writeback, and
!          <literal>2MB</literal>.  The default is <literal>512KB</> on Linux,
           <literal>0</> elsewhere.  (Non-default values of
           <symbol>BLCKSZ</symbol> change the default and maximum.)
           This parameter can only be set in the <filename>postgresql.conf</>
*************** include_dir 'conf.d'
*** 2481,2491 ****
          The amount of shared memory used for WAL data that has not yet been
          written to disk.  The default setting of -1 selects a size equal to
          1/32nd (about 3%) of <xref linkend="guc-shared-buffers">, but not less
!         than <literal>64kB</literal> nor more than the size of one WAL
          segment, typically <literal>16MB</literal>.  This value can be set
          manually if the automatic choice is too large or too small,
!         but any positive value less than <literal>32kB</literal> will be
!         treated as <literal>32kB</literal>.
          This parameter can only be set at server start.
         </para>
  
--- 2481,2491 ----
          The amount of shared memory used for WAL data that has not yet been
          written to disk.  The default setting of -1 selects a size equal to
          1/32nd (about 3%) of <xref linkend="guc-shared-buffers">, but not less
!         than <literal>64KB</literal> nor more than the size of one WAL
          segment, typically <literal>16MB</literal>.  This value can be set
          manually if the automatic choice is too large or too small,
!         but any positive value less than <literal>32KB</literal> will be
!         treated as <literal>32KB</literal>.
          This parameter can only be set at server start.
         </para>
  
*************** include_dir 'conf.d'
*** 2660,2666 ****
          than the OS's page cache, where performance might degrade.  This
          setting may have no effect on some platforms.  The valid range is
          between <literal>0</literal>, which disables controlled writeback,
!         and <literal>2MB</literal>.  The default is <literal>256kB</> on
          Linux, <literal>0</> elsewhere.  (Non-default values of
          <symbol>BLCKSZ</symbol> change the default and maximum.)
          This parameter can only be set in the <filename>postgresql.conf</>
--- 2660,2666 ----
          than the OS's page cache, where performance might degrade.  This
          setting may have no effect on some platforms.  The valid range is
          between <literal>0</literal>, which disables controlled writeback,
!         and <literal>2MB</literal>.  The default is <literal>256KB</> on
          Linux, <literal>0</> elsewhere.  (Non-default values of
          <symbol>BLCKSZ</symbol> change the default and maximum.)
          This parameter can only be set in the <filename>postgresql.conf</>
diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml
new file mode 100644
index a30e25c..b917bdd
*** a/doc/src/sgml/ecpg.sgml
--- b/doc/src/sgml/ecpg.sgml
*************** if (*(int2 *)sqldata->sqlvar[i].sqlind !
*** 8165,8171 ****
       <term><literal>sqlilongdata</></term>
        <listitem>
         <para>
!         It equals to <literal>sqldata</literal> if <literal>sqllen</literal> is larger than 32kB.
         </para>
        </listitem>
       </varlistentry>
--- 8165,8171 ----
       <term><literal>sqlilongdata</></term>
        <listitem>
         <para>
!         It equals to <literal>sqldata</literal> if <literal>sqllen</literal> is larger than 32KB.
         </para>
        </listitem>
       </varlistentry>
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index 971e642..64f347e
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*************** postgres=# SELECT * FROM pg_xlogfile_nam
*** 18788,18809 ****
  
     <para>
      <function>pg_size_pretty</> can be used to format the result of one of
!     the other functions in a human-readable way, using bytes, kB, MB, GB or TB
      as appropriate.
     </para>
  
     <para>
      <function>pg_size_bytes</> can be used to get the size in bytes from a
!     string in human-readable format. The input may have units of bytes, kB,
      MB, GB or TB, and is parsed case-insensitively. If no units are specified,
      bytes are assumed.
     </para>
  
     <note>
      <para>
!      The units kB, MB, GB and TB used by the functions
       <function>pg_size_pretty</> and <function>pg_size_bytes</> are defined
!      using powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is
       1024<superscript>2</> = 1048576 bytes, and so on.
      </para>
     </note>
--- 18788,18809 ----
  
     <para>
      <function>pg_size_pretty</> can be used to format the result of one of
!     the other functions in a human-readable way, using bytes, KB, MB, GB or TB
      as appropriate.
     </para>
  
     <para>
      <function>pg_size_bytes</> can be used to get the size in bytes from a
!     string in human-readable format. The input may have units of bytes, KB,
      MB, GB or TB, and is parsed case-insensitively. If no units are specified,
      bytes are assumed.
     </para>
  
     <note>
      <para>
!      The units KB, MB, GB and TB used by the functions
       <function>pg_size_pretty</> and <function>pg_size_bytes</> are defined
!      using powers of 2 rather than powers of 10, so 1KB is 1024 bytes, 1MB is
       1024<superscript>2</> = 1048576 bytes, and so on.
      </para>
     </note>
diff --git a/doc/src/sgml/ltree.sgml b/doc/src/sgml/ltree.sgml
new file mode 100644
index fccfd32..29be58b
*** a/doc/src/sgml/ltree.sgml
--- b/doc/src/sgml/ltree.sgml
***************
*** 31,37 ****
     A <firstterm>label path</firstterm> is a sequence of zero or more
     labels separated by dots, for example <literal>L1.L2.L3</>, representing
     a path from the root of a hierarchical tree to a particular node.  The
!    length of a label path must be less than 65kB, but keeping it under 2kB is
     preferable.  In practice this is not a major limitation; for example,
     the longest label path in the DMOZ catalog (<ulink
     url="http://www.dmoz.org";></ulink>) is about 240 bytes.
--- 31,37 ----
     A <firstterm>label path</firstterm> is a sequence of zero or more
     labels separated by dots, for example <literal>L1.L2.L3</>, representing
     a path from the root of a hierarchical tree to a particular node.  The
!    length of a label path must be less than 65KB, but keeping it under 2KB is
     preferable.  In practice this is not a major limitation; for example,
     the longest label path in the DMOZ catalog (<ulink
     url="http://www.dmoz.org";></ulink>) is about 240 bytes.
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
new file mode 100644
index 7bcbfa7..a30b276
*** a/doc/src/sgml/perform.sgml
--- b/doc/src/sgml/perform.sgml
*************** WHERE t1.unique1 &lt; 100 AND t1.unique2
*** 603,614 ****
  --------------------------------------------------------------------------------------------------------------------------------------------
   Sort  (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1)
     Sort Key: t1.fivethous
!    Sort Method: quicksort  Memory: 77kB
     -&gt;  Hash Join  (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1)
           Hash Cond: (t2.unique2 = t1.unique2)
           -&gt;  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244) (actual time=0.007..2.583 rows=10000 loops=1)
           -&gt;  Hash  (cost=229.20..229.20 rows=101 width=244) (actual time=0.659..0.659 rows=100 loops=1)
!                Buckets: 1024  Batches: 1  Memory Usage: 28kB
                 -&gt;  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)
                       Recheck Cond: (unique1 &lt; 100)
                       -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.049..0.049 rows=100 loops=1)
--- 603,614 ----
  --------------------------------------------------------------------------------------------------------------------------------------------
   Sort  (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1)
     Sort Key: t1.fivethous
!    Sort Method: quicksort  Memory: 77KB
     -&gt;  Hash Join  (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1)
           Hash Cond: (t2.unique2 = t1.unique2)
           -&gt;  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244) (actual time=0.007..2.583 rows=10000 loops=1)
           -&gt;  Hash  (cost=229.20..229.20 rows=101 width=244) (actual time=0.659..0.659 rows=100 loops=1)
!                Buckets: 1024  Batches: 1  Memory Usage: 28KB
                 -&gt;  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)
                       Recheck Cond: (unique1 &lt; 100)
                       -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.049..0.049 rows=100 loops=1)
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
new file mode 100644
index 8e701aa..3bab944
*** a/doc/src/sgml/protocol.sgml
--- b/doc/src/sgml/protocol.sgml
*************** The commands accepted in walsender mode
*** 1973,1979 ****
            Limit (throttle) the maximum amount of data transferred from server
            to client per unit of time.  The expected unit is kilobytes per second.
            If this option is specified, the value must either be equal to zero
!           or it must fall within the range from 32kB through 1GB (inclusive).
            If zero is passed or the option is not specified, no restriction is
            imposed on the transfer.
           </para>
--- 1973,1979 ----
            Limit (throttle) the maximum amount of data transferred from server
            to client per unit of time.  The expected unit is kilobytes per second.
            If this option is specified, the value must either be equal to zero
!           or it must fall within the range from 32KB through 1GB (inclusive).
            If zero is passed or the option is not specified, no restriction is
            imposed on the transfer.
           </para>
diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml
new file mode 100644
index ca1b767..e7c5c6e
*** a/doc/src/sgml/rules.sgml
--- b/doc/src/sgml/rules.sgml
*************** SELECT word FROM words ORDER BY word <->
*** 1079,1085 ****
   Limit  (cost=11583.61..11583.64 rows=10 width=32) (actual time=1431.591..1431.594 rows=10 loops=1)
     -&gt;  Sort  (cost=11583.61..11804.76 rows=88459 width=32) (actual time=1431.589..1431.591 rows=10 loops=1)
           Sort Key: ((word &lt;-&gt; 'caterpiler'::text))
!          Sort Method: top-N heapsort  Memory: 25kB
           -&gt;  Foreign Scan on words  (cost=0.00..9672.05 rows=88459 width=32) (actual time=0.057..1286.455 rows=479829 loops=1)
                 Foreign File: /usr/share/dict/words
                 Foreign File Size: 4953699
--- 1079,1085 ----
   Limit  (cost=11583.61..11583.64 rows=10 width=32) (actual time=1431.591..1431.594 rows=10 loops=1)
     -&gt;  Sort  (cost=11583.61..11804.76 rows=88459 width=32) (actual time=1431.589..1431.591 rows=10 loops=1)
           Sort Key: ((word &lt;-&gt; 'caterpiler'::text))
!          Sort Method: top-N heapsort  Memory: 25KB
           -&gt;  Foreign Scan on words  (cost=0.00..9672.05 rows=88459 width=32) (actual time=0.057..1286.455 rows=479829 loops=1)
                 Foreign File: /usr/share/dict/words
                 Foreign File Size: 4953699
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
new file mode 100644
index 4c5d748..02644ce
*** a/doc/src/sgml/runtime.sgml
--- b/doc/src/sgml/runtime.sgml
*************** psql: could not connect to server: No su
*** 659,665 ****
        <row>
         <entry><varname>SHMMAX</></>
         <entry>Maximum size of shared memory segment (bytes)</>
!        <entry>at least 1kB (more if running many copies of the server)</entry>
        </row>
  
        <row>
--- 659,665 ----
        <row>
         <entry><varname>SHMMAX</></>
         <entry>Maximum size of shared memory segment (bytes)</>
!        <entry>at least 1KB (more if running many copies of the server)</entry>
        </row>
  
        <row>
*************** kern.sysv.shmall=1024
*** 1032,1038 ****
         </para>
  
         <para>
!         <varname>SHMALL</> is measured in 4kB pages on this platform.
         </para>
  
         <para>
--- 1032,1038 ----
         </para>
  
         <para>
!         <varname>SHMALL</> is measured in 4KB pages on this platform.
         </para>
  
         <para>
*************** sysctl -w kern.sysv.shmall
*** 1075,1081 ****
        </term>
        <listitem>
         <para>
!         In the default configuration, only 512kB of shared memory per
          segment is allowed. To increase the setting, first change to the
          directory <filename>/etc/conf/cf.d</>. To display the current value of
          <varname>SHMMAX</>, run:
--- 1075,1081 ----
        </term>
        <listitem>
         <para>
!         In the default configuration, only 512KB of shared memory per
          segment is allowed. To increase the setting, first change to the
          directory <filename>/etc/conf/cf.d</>. To display the current value of
          <varname>SHMMAX</>, run:
*************** project.max-msg-ids=(priv,4096,deny)
*** 1180,1186 ****
        <listitem>
         <para>
          On <productname>UnixWare</> 7, the maximum size for shared
!         memory segments is 512kB in the default configuration.
          To display the current value of <varname>SHMMAX</>, run:
  <programlisting>
  /etc/conf/bin/idtune -g SHMMAX
--- 1180,1186 ----
        <listitem>
         <para>
          On <productname>UnixWare</> 7, the maximum size for shared
!         memory segments is 512KB in the default configuration.
          To display the current value of <varname>SHMMAX</>, run:
  <programlisting>
  /etc/conf/bin/idtune -g SHMMAX
diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml
new file mode 100644
index f40c790..6a22054
*** a/doc/src/sgml/spgist.sgml
--- b/doc/src/sgml/spgist.sgml
*************** typedef struct spgLeafConsistentOut
*** 755,761 ****
  
    <para>
     Individual leaf tuples and inner tuples must fit on a single index page
!    (8kB by default).  Therefore, when indexing values of variable-length
     data types, long values can only be supported by methods such as radix
     trees, in which each level of the tree includes a prefix that is short
     enough to fit on a page, and the final leaf level includes a suffix also
--- 755,761 ----
  
    <para>
     Individual leaf tuples and inner tuples must fit on a single index page
!    (8KB by default).  Therefore, when indexing values of variable-length
     data types, long values can only be supported by methods such as radix
     trees, in which each level of the tree includes a prefix that is short
     enough to fit on a page, and the final leaf level includes a suffix also
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
new file mode 100644
index 2d82953..aff3dd8
*** a/doc/src/sgml/storage.sgml
--- b/doc/src/sgml/storage.sgml
*************** Oversized-Attribute Storage Technique).
*** 303,309 ****
  
  <para>
  <productname>PostgreSQL</productname> uses a fixed page size (commonly
! 8kB), and does not allow tuples to span multiple pages.  Therefore, it is
  not possible to store very large field values directly.  To overcome
  this limitation, large field values are compressed and/or broken up into
  multiple physical rows.  This happens transparently to the user, with only
--- 303,309 ----
  
  <para>
  <productname>PostgreSQL</productname> uses a fixed page size (commonly
! 8KB), and does not allow tuples to span multiple pages.  Therefore, it is
  not possible to store very large field values directly.  To overcome
  this limitation, large field values are compressed and/or broken up into
  multiple physical rows.  This happens transparently to the user, with only
*************** bytes regardless of the actual size of t
*** 420,429 ****
  <para>
  The <acronym>TOAST</> management code is triggered only
  when a row value to be stored in a table is wider than
! <symbol>TOAST_TUPLE_THRESHOLD</> bytes (normally 2kB).
  The <acronym>TOAST</> code will compress and/or move
  field values out-of-line until the row value is shorter than
! <symbol>TOAST_TUPLE_TARGET</> bytes (also normally 2kB)
  or no more gains can be had.  During an UPDATE
  operation, values of unchanged fields are normally preserved as-is; so an
  UPDATE of a row with out-of-line values incurs no <acronym>TOAST</> costs if
--- 420,429 ----
  <para>
  The <acronym>TOAST</> management code is triggered only
  when a row value to be stored in a table is wider than
! <symbol>TOAST_TUPLE_THRESHOLD</> bytes (normally 2KB).
  The <acronym>TOAST</> code will compress and/or move
  field values out-of-line until the row value is shorter than
! <symbol>TOAST_TUPLE_TARGET</> bytes (also normally 2KB)
  or no more gains can be had.  During an UPDATE
  operation, values of unchanged fields are normally preserved as-is; so an
  UPDATE of a row with out-of-line values incurs no <acronym>TOAST</> costs if
*************** containing typical HTML pages and their
*** 491,497 ****
  raw data size including the <acronym>TOAST</> table, and that the main table
  contained only about 10% of the entire data (the URLs and some small HTML
  pages). There was no run time difference compared to an un-<acronym>TOAST</>ed
! comparison table, in which all the HTML pages were cut down to 7kB to fit.
  </para>
  
  </sect2>
--- 491,497 ----
  raw data size including the <acronym>TOAST</> table, and that the main table
  contained only about 10% of the entire data (the URLs and some small HTML
  pages). There was no run time difference compared to an un-<acronym>TOAST</>ed
! comparison table, in which all the HTML pages were cut down to 7KB to fit.
  </para>
  
  </sect2>
*************** an item is a row; in an index, an item i
*** 703,709 ****
  
  <para>
  Every table and index is stored as an array of <firstterm>pages</> of a
! fixed size (usually 8kB, although a different page size can be selected
  when compiling the server).  In a table, all the pages are logically
  equivalent, so a particular item (row) can be stored in any page.  In
  indexes, the first page is generally reserved as a <firstterm>metapage</>
--- 703,709 ----
  
  <para>
  Every table and index is stored as an array of <firstterm>pages</> of a
! fixed size (usually 8KB, although a different page size can be selected
  when compiling the server).  In a table, all the pages are logically
  equivalent, so a particular item (row) can be stored in any page.  In
  indexes, the first page is generally reserved as a <firstterm>metapage</>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
new file mode 100644
index 2089040..1c2764b
*** a/doc/src/sgml/wal.sgml
--- b/doc/src/sgml/wal.sgml
***************
*** 176,182 ****
     this page imaging by turning off the <xref
     linkend="guc-full-page-writes"> parameter. Battery-Backed Unit
     (BBU) disk controllers do not prevent partial page writes unless
!    they guarantee that data is written to the BBU as full (8kB) pages.
    </para>
    <para>
     <productname>PostgreSQL</> also protects against some kinds of data corruption
--- 176,182 ----
     this page imaging by turning off the <xref
     linkend="guc-full-page-writes"> parameter. Battery-Backed Unit
     (BBU) disk controllers do not prevent partial page writes unless
!    they guarantee that data is written to the BBU as full (8KB) pages.
    </para>
    <para>
     <productname>PostgreSQL</> also protects against some kinds of data corruption
***************
*** 664,670 ****
     linkend="pgtestfsync"> program can be used to measure the average time
     in microseconds that a single WAL flush operation takes.  A value of
     half of the average time the program reports it takes to flush after a
!    single 8kB write operation is often the most effective setting for
     <varname>commit_delay</varname>, so this value is recommended as the
     starting point to use when optimizing for a particular workload.  While
     tuning <varname>commit_delay</varname> is particularly useful when the
--- 664,670 ----
     linkend="pgtestfsync"> program can be used to measure the average time
     in microseconds that a single WAL flush operation takes.  A value of
     half of the average time the program reports it takes to flush after a
!    single 8KB write operation is often the most effective setting for
     <varname>commit_delay</varname>, so this value is recommended as the
     starting point to use when optimizing for a particular workload.  While
     tuning <varname>commit_delay</varname> is particularly useful when the
***************
*** 738,744 ****
     segment files, normally each 16MB in size (but the size can be changed
     by altering the <option>--with-wal-segsize</> configure option when
     building the server).  Each segment is divided into pages, normally
!    8kB each (this size can be changed via the <option>--with-wal-blocksize</>
     configure option).  The log record headers are described in
     <filename>access/xlogrecord.h</filename>; the record content is dependent
     on the type of event that is being logged.  Segment files are given
--- 738,744 ----
     segment files, normally each 16MB in size (but the size can be changed
     by altering the <option>--with-wal-segsize</> configure option when
     building the server).  Each segment is divided into pages, normally
!    8KB each (this size can be changed via the <option>--with-wal-blocksize</>
     configure option).  The log record headers are described in
     <filename>access/xlogrecord.h</filename>; the record content is dependent
     on the type of event that is being logged.  Segment files are given
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
new file mode 100644
index c2e4fa3..027188f
*** a/src/backend/access/transam/multixact.c
--- b/src/backend/access/transam/multixact.c
***************
*** 119,125 ****
   * additional flag bits for each TransactionId.  To do this without getting
   * into alignment issues, we store four bytes of flags, and then the
   * corresponding 4 Xids.  Each such 5-word (20-byte) set we call a "group", and
!  * are stored as a whole in pages.  Thus, with 8kB BLCKSZ, we keep 409 groups
   * per page.  This wastes 12 bytes per page, but that's OK -- simplicity (and
   * performance) trumps space efficiency here.
   *
--- 119,125 ----
   * additional flag bits for each TransactionId.  To do this without getting
   * into alignment issues, we store four bytes of flags, and then the
   * corresponding 4 Xids.  Each such 5-word (20-byte) set we call a "group", and
!  * are stored as a whole in pages.  Thus, with 8KB BLCKSZ, we keep 409 groups
   * per page.  This wastes 12 bytes per page, but that's OK -- simplicity (and
   * performance) trumps space efficiency here.
   *
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
new file mode 100644
index f13f9c1..be0d277
*** a/src/backend/access/transam/xlog.c
--- b/src/backend/access/transam/xlog.c
*************** LogCheckpointEnd(bool restartpoint)
*** 8063,8069 ****
  		 "%d transaction log file(s) added, %d removed, %d recycled; "
  		 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
  		 "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
! 		 "distance=%d kB, estimate=%d kB",
  		 restartpoint ? "restartpoint" : "checkpoint",
  		 CheckpointStats.ckpt_bufs_written,
  		 (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
--- 8063,8069 ----
  		 "%d transaction log file(s) added, %d removed, %d recycled; "
  		 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
  		 "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
! 		 "distance=%d KB, estimate=%d KB",
  		 restartpoint ? "restartpoint" : "checkpoint",
  		 CheckpointStats.ckpt_bufs_written,
  		 (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
new file mode 100644
index dbd27e5..943823f
*** a/src/backend/commands/explain.c
--- b/src/backend/commands/explain.c
*************** show_sort_info(SortState *sortstate, Exp
*** 2163,2169 ****
  		if (es->format == EXPLAIN_FORMAT_TEXT)
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
! 			appendStringInfo(es->str, "Sort Method: %s  %s: %ldkB\n",
  							 sortMethod, spaceType, spaceUsed);
  		}
  		else
--- 2163,2169 ----
  		if (es->format == EXPLAIN_FORMAT_TEXT)
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
! 			appendStringInfo(es->str, "Sort Method: %s  %s: %ldKB\n",
  							 sortMethod, spaceType, spaceUsed);
  		}
  		else
*************** show_hash_info(HashState *hashstate, Exp
*** 2205,2211 ****
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 							 "Buckets: %d (originally %d)  Batches: %d (originally %d)  Memory Usage: %ldkB\n",
  							 hashtable->nbuckets,
  							 hashtable->nbuckets_original,
  							 hashtable->nbatch,
--- 2205,2211 ----
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 							 "Buckets: %d (originally %d)  Batches: %d (originally %d)  Memory Usage: %ldKB\n",
  							 hashtable->nbuckets,
  							 hashtable->nbuckets_original,
  							 hashtable->nbatch,
*************** show_hash_info(HashState *hashstate, Exp
*** 2216,2222 ****
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 						   "Buckets: %d  Batches: %d  Memory Usage: %ldkB\n",
  							 hashtable->nbuckets, hashtable->nbatch,
  							 spacePeakKb);
  		}
--- 2216,2222 ----
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 						   "Buckets: %d  Batches: %d  Memory Usage: %ldKB\n",
  							 hashtable->nbuckets, hashtable->nbatch,
  							 spacePeakKb);
  		}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
new file mode 100644
index 7d8fc3e..ca9458b
*** a/src/backend/libpq/auth.c
--- b/src/backend/libpq/auth.c
*************** static int	CheckRADIUSAuth(Port *port);
*** 191,197 ****
   * Attribute Certificate (PAC), which contains the user's Windows permissions
   * (group memberships etc.). The PAC is copied into all tickets obtained on
   * the basis of this TGT (even those issued by Unix realms which the Windows
!  * realm trusts), and can be several kB in size. The maximum token size
   * accepted by Windows systems is determined by the MaxAuthToken Windows
   * registry setting. Microsoft recommends that it is not set higher than
   * 65535 bytes, so that seems like a reasonable limit for us as well.
--- 191,197 ----
   * Attribute Certificate (PAC), which contains the user's Windows permissions
   * (group memberships etc.). The PAC is copied into all tickets obtained on
   * the basis of this TGT (even those issued by Unix realms which the Windows
!  * realm trusts), and can be several KB in size. The maximum token size
   * accepted by Windows systems is determined by the MaxAuthToken Windows
   * registry setting. Microsoft recommends that it is not set higher than
   * 65535 bytes, so that seems like a reasonable limit for us as well.
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
new file mode 100644
index ba42753..6762a6b
*** a/src/backend/libpq/pqcomm.c
--- b/src/backend/libpq/pqcomm.c
*************** StreamConnection(pgsocket server_fd, Por
*** 740,749 ****
  		 * very large message needs to be sent, but we won't attempt to
  		 * enlarge the OS buffer if that happens, so somewhat arbitrarily
  		 * ensure that the OS buffer is at least PQ_SEND_BUFFER_SIZE * 4.
! 		 * (That's 32kB with the current default).
  		 *
! 		 * The default OS buffer size used to be 8kB in earlier Windows
! 		 * versions, but was raised to 64kB in Windows 2012.  So it shouldn't
  		 * be necessary to change it in later versions anymore.  Changing it
  		 * unnecessarily can even reduce performance, because setting
  		 * SO_SNDBUF in the application disables the "dynamic send buffering"
--- 740,749 ----
  		 * very large message needs to be sent, but we won't attempt to
  		 * enlarge the OS buffer if that happens, so somewhat arbitrarily
  		 * ensure that the OS buffer is at least PQ_SEND_BUFFER_SIZE * 4.
! 		 * (That's 32KB with the current default).
  		 *
! 		 * The default OS buffer size used to be 8KB in earlier Windows
! 		 * versions, but was raised to 64KB in Windows 2012.  So it shouldn't
  		 * be necessary to change it in later versions anymore.  Changing it
  		 * unnecessarily can even reduce performance, because setting
  		 * SO_SNDBUF in the application disables the "dynamic send buffering"
diff --git a/src/backend/main/main.c b/src/backend/main/main.c
new file mode 100644
index c018c90..3338843
*** a/src/backend/main/main.c
--- b/src/backend/main/main.c
*************** help(const char *progname)
*** 345,351 ****
  	printf(_("  -o OPTIONS         pass \"OPTIONS\" to each server process (obsolete)\n"));
  	printf(_("  -p PORT            port number to listen on\n"));
  	printf(_("  -s                 show statistics after each query\n"));
! 	printf(_("  -S WORK-MEM        set amount of memory for sorts (in kB)\n"));
  	printf(_("  -V, --version      output version information, then exit\n"));
  	printf(_("  --NAME=VALUE       set run-time parameter\n"));
  	printf(_("  --describe-config  describe configuration parameters, then exit\n"));
--- 345,351 ----
  	printf(_("  -o OPTIONS         pass \"OPTIONS\" to each server process (obsolete)\n"));
  	printf(_("  -p PORT            port number to listen on\n"));
  	printf(_("  -s                 show statistics after each query\n"));
! 	printf(_("  -S WORK-MEM        set amount of memory for sorts (in KB)\n"));
  	printf(_("  -V, --version      output version information, then exit\n"));
  	printf(_("  --NAME=VALUE       set run-time parameter\n"));
  	printf(_("  --describe-config  describe configuration parameters, then exit\n"));
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
new file mode 100644
index a0dba19..b560164
*** a/src/backend/replication/walsender.c
--- b/src/backend/replication/walsender.c
***************
*** 87,93 ****
   * We don't have a good idea of what a good value would be; there's some
   * overhead per message in both walsender and walreceiver, but on the other
   * hand sending large batches makes walsender less responsive to signals
!  * because signals are checked only between messages.  128kB (with
   * default 8k blocks) seems like a reasonable guess for now.
   */
  #define MAX_SEND_SIZE (XLOG_BLCKSZ * 16)
--- 87,93 ----
   * We don't have a good idea of what a good value would be; there's some
   * overhead per message in both walsender and walreceiver, but on the other
   * hand sending large batches makes walsender less responsive to signals
!  * because signals are checked only between messages.  128KB (with
   * default 8k blocks) seems like a reasonable guess for now.
   */
  #define MAX_SEND_SIZE (XLOG_BLCKSZ * 16)
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
new file mode 100644
index 03143f1..36480b5
*** a/src/backend/storage/file/fd.c
--- b/src/backend/storage/file/fd.c
*************** FileWrite(File file, char *buffer, int a
*** 1653,1659 ****
  			if (newTotal > (uint64) temp_file_limit * (uint64) 1024)
  				ereport(ERROR,
  						(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
! 				 errmsg("temporary file size exceeds temp_file_limit (%dkB)",
  						temp_file_limit)));
  		}
  	}
--- 1653,1659 ----
  			if (newTotal > (uint64) temp_file_limit * (uint64) 1024)
  				ereport(ERROR,
  						(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
! 				 errmsg("temporary file size exceeds temp_file_limit (%dKB)",
  						temp_file_limit)));
  		}
  	}
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
new file mode 100644
index b185c1b..10a2d24
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
*************** check_stack_depth(void)
*** 3114,3120 ****
  		ereport(ERROR,
  				(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
  				 errmsg("stack depth limit exceeded"),
! 				 errhint("Increase the configuration parameter \"max_stack_depth\" (currently %dkB), "
  			  "after ensuring the platform's stack depth limit is adequate.",
  						 max_stack_depth)));
  	}
--- 3114,3120 ----
  		ereport(ERROR,
  				(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
  				 errmsg("stack depth limit exceeded"),
! 				 errhint("Increase the configuration parameter \"max_stack_depth\" (currently %dKB), "
  			  "after ensuring the platform's stack depth limit is adequate.",
  						 max_stack_depth)));
  	}
*************** check_max_stack_depth(int *newval, void
*** 3177,3183 ****
  
  	if (stack_rlimit > 0 && newval_bytes > stack_rlimit - STACK_DEPTH_SLOP)
  	{
! 		GUC_check_errdetail("\"max_stack_depth\" must not exceed %ldkB.",
  							(stack_rlimit - STACK_DEPTH_SLOP) / 1024L);
  		GUC_check_errhint("Increase the platform's stack depth limit via \"ulimit -s\" or local equivalent.");
  		return false;
--- 3177,3183 ----
  
  	if (stack_rlimit > 0 && newval_bytes > stack_rlimit - STACK_DEPTH_SLOP)
  	{
! 		GUC_check_errdetail("\"max_stack_depth\" must not exceed %ldKB.",
  							(stack_rlimit - STACK_DEPTH_SLOP) / 1024L);
  		GUC_check_errhint("Increase the platform's stack depth limit via \"ulimit -s\" or local equivalent.");
  		return false;
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
new file mode 100644
index 0776f3b..a92d4e4
*** a/src/backend/utils/adt/dbsize.c
--- b/src/backend/utils/adt/dbsize.c
*************** pg_size_pretty(PG_FUNCTION_ARGS)
*** 542,565 ****
  	{
  		size >>= 9;				/* keep one extra bit for rounding */
  		if (Abs(size) < limit2)
! 			snprintf(buf, sizeof(buf), INT64_FORMAT " kB",
  					 half_rounded(size));
  		else
  		{
  			size >>= 10;
  			if (Abs(size) < limit2)
! 				snprintf(buf, sizeof(buf), INT64_FORMAT " MB",
  						 half_rounded(size));
  			else
  			{
  				size >>= 10;
  				if (Abs(size) < limit2)
! 					snprintf(buf, sizeof(buf), INT64_FORMAT " GB",
  							 half_rounded(size));
  				else
  				{
  					size >>= 10;
! 					snprintf(buf, sizeof(buf), INT64_FORMAT " TB",
  							 half_rounded(size));
  				}
  			}
--- 542,565 ----
  	{
  		size >>= 9;				/* keep one extra bit for rounding */
  		if (Abs(size) < limit2)
! 			snprintf(buf, sizeof(buf), INT64_FORMAT "KB",
  					 half_rounded(size));
  		else
  		{
  			size >>= 10;
  			if (Abs(size) < limit2)
! 				snprintf(buf, sizeof(buf), INT64_FORMAT "MB",
  						 half_rounded(size));
  			else
  			{
  				size >>= 10;
  				if (Abs(size) < limit2)
! 					snprintf(buf, sizeof(buf), INT64_FORMAT "GB",
  							 half_rounded(size));
  				else
  				{
  					size >>= 10;
! 					snprintf(buf, sizeof(buf), INT64_FORMAT "TB",
  							 half_rounded(size));
  				}
  			}
*************** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 664,670 ****
  		if (numeric_is_less(numeric_absolute(size), limit2))
  		{
  			size = numeric_half_rounded(size);
! 			result = psprintf("%s kB", numeric_to_cstring(size));
  		}
  		else
  		{
--- 664,670 ----
  		if (numeric_is_less(numeric_absolute(size), limit2))
  		{
  			size = numeric_half_rounded(size);
! 			result = psprintf("%sKB", numeric_to_cstring(size));
  		}
  		else
  		{
*************** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 673,679 ****
  			if (numeric_is_less(numeric_absolute(size), limit2))
  			{
  				size = numeric_half_rounded(size);
! 				result = psprintf("%s MB", numeric_to_cstring(size));
  			}
  			else
  			{
--- 673,679 ----
  			if (numeric_is_less(numeric_absolute(size), limit2))
  			{
  				size = numeric_half_rounded(size);
! 				result = psprintf("%sMB", numeric_to_cstring(size));
  			}
  			else
  			{
*************** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 683,696 ****
  				if (numeric_is_less(numeric_absolute(size), limit2))
  				{
  					size = numeric_half_rounded(size);
! 					result = psprintf("%s GB", numeric_to_cstring(size));
  				}
  				else
  				{
  					/* size >>= 10 */
  					size = numeric_shift_right(size, 10);
  					size = numeric_half_rounded(size);
! 					result = psprintf("%s TB", numeric_to_cstring(size));
  				}
  			}
  		}
--- 683,696 ----
  				if (numeric_is_less(numeric_absolute(size), limit2))
  				{
  					size = numeric_half_rounded(size);
! 					result = psprintf("%sGB", numeric_to_cstring(size));
  				}
  				else
  				{
  					/* size >>= 10 */
  					size = numeric_shift_right(size, 10);
  					size = numeric_half_rounded(size);
! 					result = psprintf("%sTB", numeric_to_cstring(size));
  				}
  			}
  		}
*************** pg_size_bytes(PG_FUNCTION_ARGS)
*** 830,836 ****
  					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
  					 errmsg("invalid size: \"%s\"", text_to_cstring(arg)),
  					 errdetail("Invalid size unit: \"%s\".", strptr),
! 					 errhint("Valid units are \"bytes\", \"kB\", \"MB\", \"GB\", and \"TB\".")));
  
  		if (multiplier > 1)
  		{
--- 830,836 ----
  					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
  					 errmsg("invalid size: \"%s\"", text_to_cstring(arg)),
  					 errdetail("Invalid size unit: \"%s\".", strptr),
! 					 errhint("Valid units are \"bytes\", \"KB\", \"MB\", \"GB\", and \"TB\".")));
  
  		if (multiplier > 1)
  		{
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
new file mode 100644
index 6ac5184..798f651
*** a/src/backend/utils/misc/guc.c
--- b/src/backend/utils/misc/guc.c
*************** const char *const config_type_names[] =
*** 675,681 ****
  
  typedef struct
  {
! 	char		unit[MAX_UNIT_LEN + 1]; /* unit, as a string, like "kB" or
  										 * "min" */
  	int			base_unit;		/* GUC_UNIT_XXX */
  	int			multiplier;		/* If positive, multiply the value with this
--- 675,681 ----
  
  typedef struct
  {
! 	char		unit[MAX_UNIT_LEN + 1]; /* unit, as a string, like "KB" or
  										 * "min" */
  	int			base_unit;		/* GUC_UNIT_XXX */
  	int			multiplier;		/* If positive, multiply the value with this
*************** typedef struct
*** 694,722 ****
  #error XLOG_SEG_SIZE must be between 1MB and 1GB
  #endif
  
! static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"kB\", \"MB\", \"GB\", and \"TB\".");
  
  static const unit_conversion memory_unit_conversion_table[] =
  {
  	{"TB", GUC_UNIT_KB, 1024 * 1024 * 1024},
  	{"GB", GUC_UNIT_KB, 1024 * 1024},
  	{"MB", GUC_UNIT_KB, 1024},
! 	{"kB", GUC_UNIT_KB, 1},
  
  	{"TB", GUC_UNIT_BLOCKS, (1024 * 1024 * 1024) / (BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_BLOCKS, (1024 * 1024) / (BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_BLOCKS, 1024 / (BLCKSZ / 1024)},
! 	{"kB", GUC_UNIT_BLOCKS, -(BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XBLOCKS, (1024 * 1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_XBLOCKS, (1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_XBLOCKS, 1024 / (XLOG_BLCKSZ / 1024)},
! 	{"kB", GUC_UNIT_XBLOCKS, -(XLOG_BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XSEGS, (1024 * 1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"GB", GUC_UNIT_XSEGS, (1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"MB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / (1024 * 1024))},
! 	{"kB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / 1024)},
  
  	{""}						/* end of table marker */
  };
--- 694,722 ----
  #error XLOG_SEG_SIZE must be between 1MB and 1GB
  #endif
  
! static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"KB\", \"MB\", \"GB\", and \"TB\".");
  
  static const unit_conversion memory_unit_conversion_table[] =
  {
  	{"TB", GUC_UNIT_KB, 1024 * 1024 * 1024},
  	{"GB", GUC_UNIT_KB, 1024 * 1024},
  	{"MB", GUC_UNIT_KB, 1024},
! 	{"KB", GUC_UNIT_KB, 1},
  
  	{"TB", GUC_UNIT_BLOCKS, (1024 * 1024 * 1024) / (BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_BLOCKS, (1024 * 1024) / (BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_BLOCKS, 1024 / (BLCKSZ / 1024)},
! 	{"KB", GUC_UNIT_BLOCKS, -(BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XBLOCKS, (1024 * 1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_XBLOCKS, (1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_XBLOCKS, 1024 / (XLOG_BLCKSZ / 1024)},
! 	{"KB", GUC_UNIT_XBLOCKS, -(XLOG_BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XSEGS, (1024 * 1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"GB", GUC_UNIT_XSEGS, (1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"MB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / (1024 * 1024))},
! 	{"KB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / 1024)},
  
  	{""}						/* end of table marker */
  };
*************** static struct config_int ConfigureNamesI
*** 1930,1936 ****
  	},
  
  	/*
! 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
  	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
  	 * possible, depending on the actual platform-specific stack limit.
  	 */
--- 1930,1936 ----
  	},
  
  	/*
! 	 * We use the hopefully-safely-small value of 100KB as the compiled-in
  	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
  	 * possible, depending on the actual platform-specific stack limit.
  	 */
*************** static struct config_int ConfigureNamesI
*** 2739,2745 ****
  			gettext_noop("Sets the planner's assumption about the size of the disk cache."),
  			gettext_noop("That is, the portion of the kernel's disk cache that "
  						 "will be used for PostgreSQL data files. This is measured in disk "
! 						 "pages, which are normally 8 kB each."),
  			GUC_UNIT_BLOCKS,
  		},
  		&effective_cache_size,
--- 2739,2745 ----
  			gettext_noop("Sets the planner's assumption about the size of the disk cache."),
  			gettext_noop("That is, the portion of the kernel's disk cache that "
  						 "will be used for PostgreSQL data files. This is measured in disk "
! 						 "pages, which are normally 8KB each."),
  			GUC_UNIT_BLOCKS,
  		},
  		&effective_cache_size,
*************** ReportGUCOption(struct config_generic *
*** 5301,5307 ****
  }
  
  /*
!  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
   * to the given base unit.  'value' and 'unit' are the input value and unit
   * to convert from.  The converted value is stored in *base_value.
   *
--- 5301,5307 ----
  }
  
  /*
!  * Convert a value from one of the human-friendly units ("KB", "min" etc.)
   * to the given base unit.  'value' and 'unit' are the input value and unit
   * to convert from.  The converted value is stored in *base_value.
   *
*************** convert_to_base_unit(int64 value, const
*** 5322,5328 ****
  	for (i = 0; *table[i].unit; i++)
  	{
  		if (base_unit == table[i].base_unit &&
! 			strcmp(unit, table[i].unit) == 0)
  		{
  			if (table[i].multiplier < 0)
  				*base_value = value / (-table[i].multiplier);
--- 5322,5331 ----
  	for (i = 0; *table[i].unit; i++)
  	{
  		if (base_unit == table[i].base_unit &&
! 			(strcmp(unit, table[i].unit) == 0 ||
! 			 /* support pre-PG 10 SI/metric syntax */
! 			 (strcmp(unit, "kB") == 0 &&
! 			  strcmp(table[i].unit, "KB") == 0)))
  		{
  			if (table[i].multiplier < 0)
  				*base_value = value / (-table[i].multiplier);
*************** convert_to_base_unit(int64 value, const
*** 5338,5344 ****
   * Convert a value in some base unit to a human-friendly unit.  The output
   * unit is chosen so that it's the greatest unit that can represent the value
   * without loss.  For example, if the base unit is GUC_UNIT_KB, 1024 is
!  * converted to 1 MB, but 1025 is represented as 1025 kB.
   */
  static void
  convert_from_base_unit(int64 base_value, int base_unit,
--- 5341,5347 ----
   * Convert a value in some base unit to a human-friendly unit.  The output
   * unit is chosen so that it's the greatest unit that can represent the value
   * without loss.  For example, if the base unit is GUC_UNIT_KB, 1024 is
!  * converted to 1 MB, but 1025 is represented as 1025KB.
   */
  static void
  convert_from_base_unit(int64 base_value, int base_unit,
*************** GetConfigOptionByNum(int varnum, const c
*** 7999,8012 ****
  		switch (conf->flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME))
  		{
  			case GUC_UNIT_KB:
! 				values[2] = "kB";
  				break;
  			case GUC_UNIT_BLOCKS:
! 				snprintf(buf, sizeof(buf), "%dkB", BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_XBLOCKS:
! 				snprintf(buf, sizeof(buf), "%dkB", XLOG_BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_MS:
--- 8002,8015 ----
  		switch (conf->flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME))
  		{
  			case GUC_UNIT_KB:
! 				values[2] = "KB";
  				break;
  			case GUC_UNIT_BLOCKS:
! 				snprintf(buf, sizeof(buf), "%dKB", BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_XBLOCKS:
! 				snprintf(buf, sizeof(buf), "%dKB", XLOG_BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_MS:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
new file mode 100644
index 6d0666c..92e4264
*** a/src/backend/utils/misc/postgresql.conf.sample
--- b/src/backend/utils/misc/postgresql.conf.sample
***************
*** 24,30 ****
  # "postgres -c log_connections=on".  Some parameters can be changed at run time
  # with the "SET" SQL command.
  #
! # Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
  #                MB = megabytes                     s   = seconds
  #                GB = gigabytes                     min = minutes
  #                TB = terabytes                     h   = hours
--- 24,30 ----
  # "postgres -c log_connections=on".  Some parameters can be changed at run time
  # with the "SET" SQL command.
  #
! # Memory units:  KB = kilobytes        Time units:  ms  = milliseconds
  #                MB = megabytes                     s   = seconds
  #                GB = gigabytes                     min = minutes
  #                TB = terabytes                     h   = hours
***************
*** 110,129 ****
  
  # - Memory -
  
! #shared_buffers = 32MB			# min 128kB
  					# (change requires restart)
  #huge_pages = try			# on, off, or try
  					# (change requires restart)
! #temp_buffers = 8MB			# min 800kB
  #max_prepared_transactions = 0		# zero disables the feature
  					# (change requires restart)
  # Caution: it is not advisable to set max_prepared_transactions nonzero unless
  # you actively intend to use prepared transactions.
! #work_mem = 4MB				# min 64kB
  #maintenance_work_mem = 64MB		# min 1MB
  #replacement_sort_tuples = 150000	# limits use of replacement selection sort
  #autovacuum_work_mem = -1		# min 1MB, or -1 to use maintenance_work_mem
! #max_stack_depth = 2MB			# min 100kB
  #dynamic_shared_memory_type = posix	# the default is the first option
  					# supported by the operating system:
  					#   posix
--- 110,129 ----
  
  # - Memory -
  
! #shared_buffers = 32MB			# min 128KB
  					# (change requires restart)
  #huge_pages = try			# on, off, or try
  					# (change requires restart)
! #temp_buffers = 8MB			# min 800KB
  #max_prepared_transactions = 0		# zero disables the feature
  					# (change requires restart)
  # Caution: it is not advisable to set max_prepared_transactions nonzero unless
  # you actively intend to use prepared transactions.
! #work_mem = 4MB				# min 64KB
  #maintenance_work_mem = 64MB		# min 1MB
  #replacement_sort_tuples = 150000	# limits use of replacement selection sort
  #autovacuum_work_mem = -1		# min 1MB, or -1 to use maintenance_work_mem
! #max_stack_depth = 2MB			# min 100KB
  #dynamic_shared_memory_type = posix	# the default is the first option
  					# supported by the operating system:
  					#   posix
***************
*** 135,141 ****
  # - Disk -
  
  #temp_file_limit = -1			# limits per-process temp file space
! 					# in kB, or -1 for no limit
  
  # - Kernel Resource Usage -
  
--- 135,141 ----
  # - Disk -
  
  #temp_file_limit = -1			# limits per-process temp file space
! 					# in KB, or -1 for no limit
  
  # - Kernel Resource Usage -
  
***************
*** 157,163 ****
  #bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
  #bgwriter_lru_multiplier = 2.0		# 0-10.0 multiplier on buffers scanned/round
  #bgwriter_flush_after = 0		# 0 disables,
! 					# default is 512kB on linux, 0 otherwise
  
  # - Asynchronous Behavior -
  
--- 157,163 ----
  #bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
  #bgwriter_lru_multiplier = 2.0		# 0-10.0 multiplier on buffers scanned/round
  #bgwriter_flush_after = 0		# 0 disables,
! 					# default is 512KB on linux, 0 otherwise
  
  # - Asynchronous Behavior -
  
***************
*** 193,199 ****
  #wal_compression = off			# enable compression of full-page writes
  #wal_log_hints = off			# also do full page writes of non-critical updates
  					# (change requires restart)
! #wal_buffers = -1			# min 32kB, -1 sets based on shared_buffers
  					# (change requires restart)
  #wal_writer_delay = 200ms		# 1-10000 milliseconds
  #wal_writer_flush_after = 1MB		# 0 disables
--- 193,199 ----
  #wal_compression = off			# enable compression of full-page writes
  #wal_log_hints = off			# also do full page writes of non-critical updates
  					# (change requires restart)
! #wal_buffers = -1			# min 32KB, -1 sets based on shared_buffers
  					# (change requires restart)
  #wal_writer_delay = 200ms		# 1-10000 milliseconds
  #wal_writer_flush_after = 1MB		# 0 disables
***************
*** 208,214 ****
  #min_wal_size = 80MB
  #checkpoint_completion_target = 0.5	# checkpoint target duration, 0.0 - 1.0
  #checkpoint_flush_after = 0		# 0 disables,
! 					# default is 256kB on linux, 0 otherwise
  #checkpoint_warning = 30s		# 0 disables
  
  # - Archiving -
--- 208,214 ----
  #min_wal_size = 80MB
  #checkpoint_completion_target = 0.5	# checkpoint target duration, 0.0 - 1.0
  #checkpoint_flush_after = 0		# 0 disables,
! 					# default is 256KB on linux, 0 otherwise
  #checkpoint_warning = 30s		# 0 disables
  
  # - Archiving -
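
The unit strings changed in GetConfigOptionByNum() earlier are what
pg_settings reports in its "unit" column, so the new spelling is visible
there as well.  A quick survey query (a sketch; the "8KB" entry assumes
the default BLCKSZ of 8192):

	SELECT name, unit, setting
	  FROM pg_settings
	 WHERE unit IN ('KB', '8KB')
	 ORDER BY name;
	-- expected: memory parameters such as work_mem now show unit "KB",
	-- and block-sized ones such as effective_cache_size show "8KB"
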
diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c
new file mode 100644
index 73cb7ee..dc102fa
*** a/src/bin/initdb/initdb.c
--- b/src/bin/initdb/initdb.c
*************** test_config_settings(void)
*** 1180,1186 ****
  	if ((n_buffers * (BLCKSZ / 1024)) % 1024 == 0)
  		printf("%dMB\n", (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		printf("%dkB\n", n_buffers * (BLCKSZ / 1024));
  
  	printf(_("selecting dynamic shared memory implementation ... "));
  	fflush(stdout);
--- 1180,1186 ----
  	if ((n_buffers * (BLCKSZ / 1024)) % 1024 == 0)
  		printf("%dMB\n", (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		printf("%dKB\n", n_buffers * (BLCKSZ / 1024));
  
  	printf(_("selecting dynamic shared memory implementation ... "));
  	fflush(stdout);
*************** setup_config(void)
*** 1214,1220 ****
  		snprintf(repltok, sizeof(repltok), "shared_buffers = %dMB",
  				 (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		snprintf(repltok, sizeof(repltok), "shared_buffers = %dkB",
  				 n_buffers * (BLCKSZ / 1024));
  	conflines = replace_token(conflines, "#shared_buffers = 32MB", repltok);
  
--- 1214,1220 ----
  		snprintf(repltok, sizeof(repltok), "shared_buffers = %dMB",
  				 (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		snprintf(repltok, sizeof(repltok), "shared_buffers = %dKB",
  				 n_buffers * (BLCKSZ / 1024));
  	conflines = replace_token(conflines, "#shared_buffers = 32MB", repltok);
  
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
new file mode 100644
index ec69682..d41e330
*** a/src/bin/pg_basebackup/pg_basebackup.c
--- b/src/bin/pg_basebackup/pg_basebackup.c
*************** usage(void)
*** 236,242 ****
  	printf(_("  -D, --pgdata=DIRECTORY receive base backup into directory\n"));
  	printf(_("  -F, --format=p|t       output format (plain (default), tar)\n"));
  	printf(_("  -r, --max-rate=RATE    maximum transfer rate to transfer data directory\n"
! 	  "                         (in kB/s, or use suffix \"k\" or \"M\")\n"));
  	printf(_("  -R, --write-recovery-conf\n"
  			 "                         write recovery.conf after backup\n"));
  	printf(_("  -S, --slot=SLOTNAME    replication slot to use\n"));
--- 236,242 ----
  	printf(_("  -D, --pgdata=DIRECTORY receive base backup into directory\n"));
  	printf(_("  -F, --format=p|t       output format (plain (default), tar)\n"));
  	printf(_("  -r, --max-rate=RATE    maximum transfer rate to transfer data directory\n"
! 	  "                         (in KB/s, or use suffix \"k\" or \"M\")\n"));
  	printf(_("  -R, --write-recovery-conf\n"
  			 "                         write recovery.conf after backup\n"));
  	printf(_("  -S, --slot=SLOTNAME    replication slot to use\n"));
*************** progress_report(int tablespacenum, const
*** 601,608 ****
  			 * call)
  			 */
  			fprintf(stderr,
! 					ngettext("%*s/%s kB (100%%), %d/%d tablespace %*s",
! 							 "%*s/%s kB (100%%), %d/%d tablespaces %*s",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str,
--- 601,608 ----
  			 * call)
  			 */
  			fprintf(stderr,
! 					ngettext("%*s/%s KB (100%%), %d/%d tablespace %*s",
! 							 "%*s/%s KB (100%%), %d/%d tablespaces %*s",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str,
*************** progress_report(int tablespacenum, const
*** 613,620 ****
  			bool		truncate = (strlen(filename) > VERBOSE_FILENAME_LENGTH);
  
  			fprintf(stderr,
! 					ngettext("%*s/%s kB (%d%%), %d/%d tablespace (%s%-*.*s)",
! 							 "%*s/%s kB (%d%%), %d/%d tablespaces (%s%-*.*s)",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str, percent,
--- 613,620 ----
  			bool		truncate = (strlen(filename) > VERBOSE_FILENAME_LENGTH);
  
  			fprintf(stderr,
! 					ngettext("%*s/%s KB (%d%%), %d/%d tablespace (%s%-*.*s)",
! 							 "%*s/%s KB (%d%%), %d/%d tablespaces (%s%-*.*s)",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str, percent,
*************** progress_report(int tablespacenum, const
*** 629,636 ****
  	}
  	else
  		fprintf(stderr,
! 				ngettext("%*s/%s kB (%d%%), %d/%d tablespace",
! 						 "%*s/%s kB (%d%%), %d/%d tablespaces",
  						 tablespacecount),
  				(int) strlen(totalsize_str),
  				totaldone_str, totalsize_str, percent,
--- 629,636 ----
  	}
  	else
  		fprintf(stderr,
! 				ngettext("%*s/%s KB (%d%%), %d/%d tablespace",
! 						 "%*s/%s KB (%d%%), %d/%d tablespaces",
  						 tablespacecount),
  				(int) strlen(totalsize_str),
  				totaldone_str, totalsize_str, percent,
diff --git a/src/bin/pg_rewind/logging.c b/src/bin/pg_rewind/logging.c
new file mode 100644
index a232abb..6b728d9
*** a/src/bin/pg_rewind/logging.c
--- b/src/bin/pg_rewind/logging.c
*************** progress_report(bool force)
*** 137,143 ****
  	snprintf(fetch_size_str, sizeof(fetch_size_str), INT64_FORMAT,
  			 fetch_size / 1024);
  
! 	pg_log(PG_PROGRESS, "%*s/%s kB (%d%%) copied",
  		   (int) strlen(fetch_size_str), fetch_done_str, fetch_size_str,
  		   percent);
  	printf("\r");
--- 137,143 ----
  	snprintf(fetch_size_str, sizeof(fetch_size_str), INT64_FORMAT,
  			 fetch_size / 1024);
  
! 	pg_log(PG_PROGRESS, "%*s/%s KB (%d%%) copied",
  		   (int) strlen(fetch_size_str), fetch_done_str, fetch_size_str,
  		   percent);
  	printf("\r");
diff --git a/src/bin/pg_test_fsync/pg_test_fsync.c b/src/bin/pg_test_fsync/pg_test_fsync.c
new file mode 100644
index c842762..5fa1a45
*** a/src/bin/pg_test_fsync/pg_test_fsync.c
--- b/src/bin/pg_test_fsync/pg_test_fsync.c
*************** test_sync(int writes_per_op)
*** 239,247 ****
  	bool		fs_warning = false;
  
  	if (writes_per_op == 1)
! 		printf("\nCompare file sync methods using one %dkB write:\n", XLOG_BLCKSZ_K);
  	else
! 		printf("\nCompare file sync methods using two %dkB writes:\n", XLOG_BLCKSZ_K);
  	printf("(in wal_sync_method preference order, except fdatasync is Linux's default)\n");
  
  	/*
--- 239,247 ----
  	bool		fs_warning = false;
  
  	if (writes_per_op == 1)
! 		printf("\nCompare file sync methods using one %dKB write:\n", XLOG_BLCKSZ_K);
  	else
! 		printf("\nCompare file sync methods using two %dKB writes:\n", XLOG_BLCKSZ_K);
  	printf("(in wal_sync_method preference order, except fdatasync is Linux's default)\n");
  
  	/*
*************** static void
*** 395,408 ****
  test_open_syncs(void)
  {
  	printf("\nCompare open_sync with different write sizes:\n");
! 	printf("(This is designed to compare the cost of writing 16kB in different write\n"
  		   "open_sync sizes.)\n");
  
! 	test_open_sync(" 1 * 16kB open_sync write", 16);
! 	test_open_sync(" 2 *  8kB open_sync writes", 8);
! 	test_open_sync(" 4 *  4kB open_sync writes", 4);
! 	test_open_sync(" 8 *  2kB open_sync writes", 2);
! 	test_open_sync("16 *  1kB open_sync writes", 1);
  }
  
  /*
--- 395,408 ----
  test_open_syncs(void)
  {
  	printf("\nCompare open_sync with different write sizes:\n");
! 	printf("(This is designed to compare the cost of writing 16KB in different write\n"
  		   "open_sync sizes.)\n");
  
! 	test_open_sync(" 1 * 16KB open_sync write", 16);
! 	test_open_sync(" 2 *  8KB open_sync writes", 8);
! 	test_open_sync(" 4 *  4KB open_sync writes", 4);
! 	test_open_sync(" 8 *  2KB open_sync writes", 2);
! 	test_open_sync("16 *  1KB open_sync writes", 1);
  }
  
  /*
*************** test_non_sync(void)
*** 521,527 ****
  	/*
  	 * Test a simple write without fsync
  	 */
! 	printf("\nNon-sync'ed %dkB writes:\n", XLOG_BLCKSZ_K);
  	printf(LABEL_FORMAT, "write");
  	fflush(stdout);
  
--- 521,527 ----
  	/*
  	 * Test a simple write without fsync
  	 */
! 	printf("\nNon-sync'ed %dKB writes:\n", XLOG_BLCKSZ_K);
  	printf(LABEL_FORMAT, "write");
  	fflush(stdout);
  
diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h
new file mode 100644
index 6d0e12b..425768e
*** a/src/include/executor/hashjoin.h
--- b/src/include/executor/hashjoin.h
*************** typedef struct HashSkewBucket
*** 104,110 ****
  
  /*
   * To reduce palloc overhead, the HashJoinTuples for the current batch are
!  * packed in 32kB buffers instead of pallocing each tuple individually.
   */
  typedef struct HashMemoryChunkData
  {
--- 104,110 ----
  
  /*
   * To reduce palloc overhead, the HashJoinTuples for the current batch are
!  * packed in 32KB buffers instead of pallocing each tuple individually.
   */
  typedef struct HashMemoryChunkData
  {
diff --git a/src/test/regress/expected/dbsize.out b/src/test/regress/expected/dbsize.out
new file mode 100644
index 20d8cb5..0cb09a5
*** a/src/test/regress/expected/dbsize.out
--- b/src/test/regress/expected/dbsize.out
*************** SELECT size, pg_size_pretty(size), pg_si
*** 6,15 ****
  ------------------+----------------+----------------
                 10 | 10 bytes       | -10 bytes
               1000 | 1000 bytes     | -1000 bytes
!           1000000 | 977 kB         | -977 kB
!        1000000000 | 954 MB         | -954 MB
!     1000000000000 | 931 GB         | -931 GB
!  1000000000000000 | 909 TB         | -909 TB
  (6 rows)
  
  SELECT size, pg_size_pretty(size), pg_size_pretty(-1 * size) FROM
--- 6,15 ----
  ------------------+----------------+----------------
                 10 | 10 bytes       | -10 bytes
               1000 | 1000 bytes     | -1000 bytes
!           1000000 | 977KB          | -977KB
!        1000000000 | 954MB          | -954MB
!     1000000000000 | 931GB          | -931GB
!  1000000000000000 | 909TB          | -909TB
  (6 rows)
  
  SELECT size, pg_size_pretty(size), pg_size_pretty(-1 * size) FROM
*************** SELECT size, pg_size_pretty(size), pg_si
*** 23,48 ****
  --------------------+----------------+----------------
                   10 | 10 bytes       | -10 bytes
                 1000 | 1000 bytes     | -1000 bytes
!             1000000 | 977 kB         | -977 kB
!          1000000000 | 954 MB         | -954 MB
!       1000000000000 | 931 GB         | -931 GB
!    1000000000000000 | 909 TB         | -909 TB
                 10.5 | 10.5 bytes     | -10.5 bytes
               1000.5 | 1000.5 bytes   | -1000.5 bytes
!           1000000.5 | 977 kB         | -977 kB
!        1000000000.5 | 954 MB         | -954 MB
!     1000000000000.5 | 931 GB         | -931 GB
!  1000000000000000.5 | 909 TB         | -909 TB
  (12 rows)
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1kB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
     size   |  pg_size_bytes   
  ----------+------------------
   1        |                1
   123bytes |              123
!  1kB      |             1024
   1MB      |          1048576
    1 GB    |       1073741824
   1.5 GB   |       1610612736
--- 23,48 ----
  --------------------+----------------+----------------
                   10 | 10 bytes       | -10 bytes
                 1000 | 1000 bytes     | -1000 bytes
!             1000000 | 977KB          | -977KB
!          1000000000 | 954MB          | -954MB
!       1000000000000 | 931GB          | -931GB
!    1000000000000000 | 909TB          | -909TB
                 10.5 | 10.5 bytes     | -10.5 bytes
               1000.5 | 1000.5 bytes   | -1000.5 bytes
!           1000000.5 | 977KB          | -977KB
!        1000000000.5 | 954MB          | -954MB
!     1000000000000.5 | 931GB          | -931GB
!  1000000000000000.5 | 909TB          | -909TB
  (12 rows)
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1KB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
     size   |  pg_size_bytes   
  ----------+------------------
   1        |                1
   123bytes |              123
!  1KB      |             1024
   1MB      |          1048576
    1 GB    |       1073741824
   1.5 GB   |       1610612736
*************** SELECT size, pg_size_bytes(size) FROM
*** 105,119 ****
  SELECT pg_size_bytes('1 AB');
  ERROR:  invalid size: "1 AB"
  DETAIL:  Invalid size unit: "AB".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A');
  ERROR:  invalid size: "1 AB A"
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A    ');
  ERROR:  invalid size: "1 AB A    "
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('9223372036854775807.9');
  ERROR:  bigint out of range
  SELECT pg_size_bytes('1e100');
--- 105,119 ----
  SELECT pg_size_bytes('1 AB');
  ERROR:  invalid size: "1 AB"
  DETAIL:  Invalid size unit: "AB".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A');
  ERROR:  invalid size: "1 AB A"
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A    ');
  ERROR:  invalid size: "1 AB A    "
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('9223372036854775807.9');
  ERROR:  bigint out of range
  SELECT pg_size_bytes('1e100');
*************** ERROR:  invalid size: "1e100000000000000
*** 123,129 ****
  SELECT pg_size_bytes('1 byte');  -- the singular "byte" is not supported
  ERROR:  invalid size: "1 byte"
  DETAIL:  Invalid size unit: "byte".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('');
  ERROR:  invalid size: ""
  SELECT pg_size_bytes('kb');
--- 123,129 ----
  SELECT pg_size_bytes('1 byte');  -- the singular "byte" is not supported
  ERROR:  invalid size: "1 byte"
  DETAIL:  Invalid size unit: "byte".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('');
  ERROR:  invalid size: ""
  SELECT pg_size_bytes('kb');
*************** SELECT pg_size_bytes('-. kb');
*** 138,146 ****
  ERROR:  invalid size: "-. kb"
  SELECT pg_size_bytes('.+912');
  ERROR:  invalid size: ".+912"
! SELECT pg_size_bytes('+912+ kB');
! ERROR:  invalid size: "+912+ kB"
! DETAIL:  Invalid size unit: "+ kB".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
! SELECT pg_size_bytes('++123 kB');
! ERROR:  invalid size: "++123 kB"
--- 138,146 ----
  ERROR:  invalid size: "-. kb"
  SELECT pg_size_bytes('.+912');
  ERROR:  invalid size: ".+912"
! SELECT pg_size_bytes('+912+ KB');
! ERROR:  invalid size: "+912+ KB"
! DETAIL:  Invalid size unit: "+ KB".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
! SELECT pg_size_bytes('++123 KB');
! ERROR:  invalid size: "++123 KB"
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
new file mode 100644
index d9bbae0..a04a117
*** a/src/test/regress/expected/join.out
--- b/src/test/regress/expected/join.out
*************** reset enable_nestloop;
*** 2365,2371 ****
  --
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
! set work_mem to '64kB';
  set enable_mergejoin to off;
  explain (costs off)
  select count(*) from tenk1 a, tenk1 b
--- 2365,2371 ----
  --
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
! set work_mem to '64KB';
  set enable_mergejoin to off;
  explain (costs off)
  select count(*) from tenk1 a, tenk1 b
diff --git a/src/test/regress/expected/json.out b/src/test/regress/expected/json.out
new file mode 100644
index efcdc41..6679203
*** a/src/test/regress/expected/json.out
--- b/src/test/regress/expected/json.out
*************** LINE 1: SELECT '{"abc":1,3}'::json;
*** 203,215 ****
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::json;			-- OK
--- 203,215 ----
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::json;			-- OK
diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out
new file mode 100644
index a6d25de..289dae5
*** a/src/test/regress/expected/jsonb.out
--- b/src/test/regress/expected/jsonb.out
*************** LINE 1: SELECT '{"abc":1,3}'::jsonb;
*** 203,215 ****
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::jsonb;			-- OK
--- 203,215 ----
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::jsonb;			-- OK
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
new file mode 100644
index f06cfa4..5bc8b58
*** a/src/test/regress/expected/rangefuncs.out
--- b/src/test/regress/expected/rangefuncs.out
*************** create function foo1(n integer, out a te
*** 1772,1778 ****
    returns setof record
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
! set work_mem='64kB';
  select t.a, t, t.a from foo1(10000) t limit 1;
     a   |         t         |   a   
  -------+-------------------+-------
--- 1772,1778 ----
    returns setof record
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
! set work_mem='64KB';
  select t.a, t, t.a from foo1(10000) t limit 1;
     a   |         t         |   a   
  -------+-------------------+-------
diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c
new file mode 100644
index 574f5b8..4271ffa
*** a/src/test/regress/pg_regress.c
--- b/src/test/regress/pg_regress.c
*************** regression_main(int argc, char *argv[],
*** 2248,2254 ****
  		fputs("log_autovacuum_min_duration = 0\n", pg_conf);
  		fputs("log_checkpoints = on\n", pg_conf);
  		fputs("log_lock_waits = on\n", pg_conf);
! 		fputs("log_temp_files = 128kB\n", pg_conf);
  		fputs("max_prepared_transactions = 2\n", pg_conf);
  
  		for (sl = temp_configs; sl != NULL; sl = sl->next)
--- 2248,2254 ----
  		fputs("log_autovacuum_min_duration = 0\n", pg_conf);
  		fputs("log_checkpoints = on\n", pg_conf);
  		fputs("log_lock_waits = on\n", pg_conf);
! 		fputs("log_temp_files = 128KB\n", pg_conf);
  		fputs("max_prepared_transactions = 2\n", pg_conf);
  
  		for (sl = temp_configs; sl != NULL; sl = sl->next)
diff --git a/src/test/regress/sql/dbsize.sql b/src/test/regress/sql/dbsize.sql
new file mode 100644
index d10a4d7..d34d71d
*** a/src/test/regress/sql/dbsize.sql
--- b/src/test/regress/sql/dbsize.sql
*************** SELECT size, pg_size_pretty(size), pg_si
*** 12,18 ****
              (1000000000000000.5::numeric)) x(size);
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1kB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
  
  -- case-insensitive units are supported
--- 12,18 ----
              (1000000000000000.5::numeric)) x(size);
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1KB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
  
  -- case-insensitive units are supported
*************** SELECT pg_size_bytes('-.kb');
*** 47,51 ****
  SELECT pg_size_bytes('-. kb');
  
  SELECT pg_size_bytes('.+912');
! SELECT pg_size_bytes('+912+ kB');
! SELECT pg_size_bytes('++123 kB');
--- 47,51 ----
  SELECT pg_size_bytes('-. kb');
  
  SELECT pg_size_bytes('.+912');
! SELECT pg_size_bytes('+912+ KB');
! SELECT pg_size_bytes('++123 KB');
diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql
new file mode 100644
index 97bccec..2e680a6
*** a/src/test/regress/sql/join.sql
--- b/src/test/regress/sql/join.sql
*************** reset enable_nestloop;
*** 484,490 ****
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
  
! set work_mem to '64kB';
  set enable_mergejoin to off;
  
  explain (costs off)
--- 484,490 ----
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
  
! set work_mem to '64KB';
  set enable_mergejoin to off;
  
  explain (costs off)
diff --git a/src/test/regress/sql/json.sql b/src/test/regress/sql/json.sql
new file mode 100644
index 603288b..201689e
*** a/src/test/regress/sql/json.sql
--- b/src/test/regress/sql/json.sql
*************** SELECT '{"abc":1:2}'::json;		-- ERROR, c
*** 42,48 ****
  SELECT '{"abc":1,3}'::json;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::json;
  SELECT repeat('{"a":', 10000)::json;
  RESET max_stack_depth;
--- 42,48 ----
  SELECT '{"abc":1,3}'::json;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::json;
  SELECT repeat('{"a":', 10000)::json;
  RESET max_stack_depth;
diff --git a/src/test/regress/sql/jsonb.sql b/src/test/regress/sql/jsonb.sql
new file mode 100644
index b84bd70..090478d
*** a/src/test/regress/sql/jsonb.sql
--- b/src/test/regress/sql/jsonb.sql
*************** SELECT '{"abc":1:2}'::jsonb;		-- ERROR,
*** 42,48 ****
  SELECT '{"abc":1,3}'::jsonb;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::jsonb;
  SELECT repeat('{"a":', 10000)::jsonb;
  RESET max_stack_depth;
--- 42,48 ----
  SELECT '{"abc":1,3}'::jsonb;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::jsonb;
  SELECT repeat('{"a":', 10000)::jsonb;
  RESET max_stack_depth;
diff --git a/src/test/regress/sql/rangefuncs.sql b/src/test/regress/sql/rangefuncs.sql
new file mode 100644
index c8edc55..3a08f66
*** a/src/test/regress/sql/rangefuncs.sql
--- b/src/test/regress/sql/rangefuncs.sql
*************** create function foo1(n integer, out a te
*** 484,490 ****
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
  
! set work_mem='64kB';
  select t.a, t, t.a from foo1(10000) t limit 1;
  reset work_mem;
  select t.a, t, t.a from foo1(10000) t limit 1;
--- 484,490 ----
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
  
! set work_mem='64KB';
  select t.a, t, t.a from foo1(10000) t limit 1;
  reset work_mem;
  select t.a, t, t.a from foo1(10000) t limit 1;
diff --git a/src/tools/msvc/config_default.pl b/src/tools/msvc/config_default.pl
new file mode 100644
index f046687..04f9560
*** a/src/tools/msvc/config_default.pl
--- b/src/tools/msvc/config_default.pl
*************** our $config = {
*** 10,17 ****
  	# float8byval=> $platformbits == 64, # --disable-float8-byval,
  	# off by default on 32 bit platforms, on by default on 64 bit platforms
  
! 	# blocksize => 8,         # --with-blocksize, 8kB by default
! 	# wal_blocksize => 8,     # --with-wal-blocksize, 8kB by default
  	# wal_segsize => 16,      # --with-wal-segsize, 16MB by default
  	ldap      => 1,        # --with-ldap
  	extraver  => undef,    # --with-extra-version=<string>
--- 10,17 ----
  	# float8byval=> $platformbits == 64, # --disable-float8-byval,
  	# off by default on 32 bit platforms, on by default on 64 bit platforms
  
! 	# blocksize => 8,         # --with-blocksize, 8KB by default
! 	# wal_blocksize => 8,     # --with-wal-blocksize, 8KB by default
  	# wal_segsize => 16,      # --with-wal-segsize, 16MB by default
  	ldap      => 1,        # --with-ldap
  	extraver  => undef,    # --with-extra-version=<string>
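
Finally, the dbsize regression changes above can be exercised by hand:
pg_size_bytes() accepts the new spelling (and, per the test comments,
units remain case-insensitive on input), while unknown units now hint
the upper-case forms.  A sketch, with the expected results taken from
the regression output above:

	SELECT pg_size_bytes('1KB');     -- 1024
	SELECT pg_size_bytes('1.5 GB');  -- 1610612736; whitespace between
	                                 -- number and unit is still allowed
	SELECT pg_size_bytes('1 AB');    -- ERROR, with the hint listing
	                                 -- "bytes", "KB", "MB", "GB", "TB"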