[HACKERS] Git diff patch in context diff format

2012-08-03 Thread Qi Huang



Hi, hackers

I was exporting my project to a patch file. As the patch review
requires, the patch needs to be in context diff format
(http://wiki.postgresql.org/wiki/Reviewing_a_Patch). But git diff exports
in a format similar to unified format. What is everyone doing with patching
currently? Is there any standard way?

Thanks.

Best Regards
Huang Qi Victor
Computer Science of National University of Singapore
  
  

Re: [HACKERS] Git diff patch in context diff format

2012-08-03 Thread Robert Haas
On Fri, Aug 3, 2012 at 2:56 AM, Qi Huang <huangq...@outlook.com> wrote:
 Hi, hackers
 I was exporting my project to a patch file. As the patch review
 requires, the patch needs to be in context diff format
 (http://wiki.postgresql.org/wiki/Reviewing_a_Patch). But the git diff
 exports in a format similar to unified format. What is everyone doing with
 patching currently? Is there any standard way?

When I want a context diff, I just do:

git diff | filterdiff --format=context

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pg_dump custom format without timestamp?

2012-08-03 Thread Fabien COELHO


Hello,

I was looking into using hardlinks to reduce the storage of keeping and 
syncing periodic database dumps when they are identical. This works fine 
with the textual format, but not for the custom format because the file 
header includes a timestamp set by function WriteHead in file 
src/bin/pg_dump/pg_backup_archiver.c.


In order to circumvent this issue, I would think of adding a 
--no-timestamp option to pg_dump that puts zeros everywhere in place of 
the actual timestamp in that case, and possibly ignoring said 
timestamp in function ReadHead.


Any thoughts about this?

--
Fabien.



Re: [HACKERS] [PATCH] Make psql -1 < file.sql work as with -f

2012-08-03 Thread Fabien COELHO


Here is a new submission with the expected context diff format.


Dear PostgreSQL developers,

Please find attached a patch so that:

   Make psql -1 < file.sql work as with -f

   Make psql --single-transaction option work on a non-interactive
   standard input as well, so that psql -1 < input.sql behaves as
   psql -1 -f input.sql.

This saner/less error-prone behavior was discussed in this thread back in 
June:


http://archives.postgresql.org/pgsql-hackers/2012-06/msg00785.php

I have tested it manually and it works for me. I'm not sure this is the best 
possible implementation, but the diff is small. I haven't found a place 
in the regression tests where psql could be tested with different options. 
Did I miss something?


--
Fabien.

*** a/doc/src/sgml/ref/psql-ref.sgml
--- b/doc/src/sgml/ref/psql-ref.sgml
***
*** 512,521  PostgreSQL documentation
listitem
 para
  When applicationpsql/application executes a script with the
! option-f/ option, adding this option wraps
! commandBEGIN//commandCOMMIT/ around the script to execute it
! as a single transaction.  This ensures that either all the commands
! complete successfully, or no changes are applied.
 /para
  
 para
--- 512,521 
listitem
 para
  When applicationpsql/application executes a script with the
! option-f/ option or from a non-interactive standard input, adding
! this option wraps commandBEGIN//commandCOMMIT/ around the script
! to execute it as a single transaction.  This ensures that either all
! the commands complete successfully, or no changes are applied.
 /para
  
 para
*** a/src/bin/psql/help.c
--- b/src/bin/psql/help.c
***
*** 97,103  usage(void)
  	printf(_(  -V, --versionoutput version information, then exit\n));
  	printf(_(  -X, --no-psqlrc  do not read startup file (~/.psqlrc)\n));
  	printf(_(  -1 (\one\), --single-transaction\n
! 			execute command file as a single transaction\n));
  	printf(_(  -?, --help   show this help, then exit\n));
  
  	printf(_(\nInput and output options:\n));
--- 97,104 
  	printf(_(  -V, --versionoutput version information, then exit\n));
  	printf(_(  -X, --no-psqlrc  do not read startup file (~/.psqlrc)\n));
  	printf(_(  -1 (\one\), --single-transaction\n
!execute command file or non-interactive stdin\n
!as a single transaction\n));
  	printf(_(  -?, --help   show this help, then exit\n));
  
  	printf(_(\nInput and output options:\n));
*** a/src/bin/psql/settings.h
--- b/src/bin/psql/settings.h
***
*** 76,81  typedef struct _psqlSettings
--- 76,82 
  
  	bool		notty;			/* stdin or stdout is not a tty (as determined
   * on startup) */
+ 	bool		stdin_notty;	/* stdin is not a tty (on startup) */
  	enum trivalue getPassword;	/* prompt the user for a username and password */
  	FILE	   *cur_cmd_source; /* describe the status of the current main
   * loop */
*** a/src/bin/psql/startup.c
--- b/src/bin/psql/startup.c
***
*** 133,139  main(int argc, char *argv[])
  	/* We must get COLUMNS here before readline() sets it */
  	pset.popt.topt.env_columns = getenv(COLUMNS) ? atoi(getenv(COLUMNS)) : 0;
  
! 	pset.notty = (!isatty(fileno(stdin)) || !isatty(fileno(stdout)));
  
  	pset.getPassword = TRI_DEFAULT;
  
--- 133,140 
  	/* We must get COLUMNS here before readline() sets it */
  	pset.popt.topt.env_columns = getenv(COLUMNS) ? atoi(getenv(COLUMNS)) : 0;
  
! 	pset.stdin_notty = !isatty(fileno(stdin));
! 	pset.notty = (pset.stdin_notty || !isatty(fileno(stdout)));
  
  	pset.getPassword = TRI_DEFAULT;
  
***
*** 314,320  main(int argc, char *argv[])
  		if (!pset.notty)
  			initializeInput(options.no_readline ? 0 : 1);
  
! 		successResult = MainLoop(stdin);
  	}
  
  	/* clean up */
--- 315,326 
  		if (!pset.notty)
  			initializeInput(options.no_readline ? 0 : 1);
  
! 		if (options.single_txn && pset.stdin_notty)
! 			/* stdin is NOT a tty and -1, process as a file */
! 			successResult = process_file("-", true, false);
! 		else
! 			/* no single transaction, process as if interactive */
! 			successResult = MainLoop(stdin);
  	}
  
  	/* clean up */



Re: [HACKERS] Git diff patch in context diff format

2012-08-03 Thread Qi Huang
Thanks for your suggestion, Robert.

I just found this page
(http://wiki.postgresql.org/wiki/Working_with_Git#Context_diffs_with_Git),
quite useful and powerful.

Best Regards
Huang Qi Victor
Computer Science of National University of Singapore

 Date: Fri, 3 Aug 2012 04:28:42 -0400
 Subject: Re: [HACKERS] Git diff patch in context diff format
 From: robertmh...@gmail.com
 To: huangq...@outlook.com
 CC: pgsql-hackers@postgresql.org
 
 On Fri, Aug 3, 2012 at 2:56 AM, Qi Huang huangq...@outlook.com wrote:
  Hi, hackers
  I was exporting my project to a patch file. As the patch review
  requires, the patch needs to be in context diff format
  (http://wiki.postgresql.org/wiki/Reviewing_a_Patch). But the git diff
  exports in a format similar to unified format. What is everyone doing with
  patching currently? Is there any standard way?
 
 When I want a context diff, I just do:
 
 git diff | filterdiff --format=context
 
 -- 
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company
 
 -- 
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers
  

FW: [HACKERS] [WIP] Performance Improvement by reducing WAL for Update Operation

2012-08-03 Thread Amit Kapila
One thing I forgot to mention is that for the tests I used the pg_prewarm
utility to load all the data into shared buffers before the start of each test.

 

With Regards,

Amit Kapila.

 

From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Amit kapila
Sent: Friday, August 03, 2012 5:17 PM
To: pgsql-hackers@postgresql.org
Subject: [HACKERS] [WIP] Performance Improvement by reducing WAL for Update
Operation

 

Problem statement: 

---

Reducing wal size for an update operation for performance improvement.

Advantages: 

-
1. Observed increase in performance with pgbench when the server is
running with synchronous_commit off. 
   a. with pgbench (tpc_b) - 13% 
   b. with modified pgbench (such that the size of the modified columns
      is less than the whole row) - 83% 

2. WAL size is reduced 

Design/Implementation: 

--

Currently the change is done only for fixed-length columns in simple tables,
and the tuple must not contain NULLs. 

This is a proof of concept; the design and implementation need to be
changed based on the final design required for handling other scenarios.

Update operation: 
-
1. Check whether the table is simple (no TOAST, no before-update triggers). 
2. Works only for tuples with no NULLs. 
3. Identify the modified columns from the target entry. 
4. Based on the modified column list, check whether any variable-length
columns are modified; if so, this optimization is not applied. 
5. Identify the offset and length of the modified columns and store them as
an optimized WAL tuple in the following format. 
   Note: the WAL update header is modified to denote whether the WAL update
optimization was done or not. 
WAL update header + tuple header (no change from previous format) + 
[offset(2 bytes)] [length(2 bytes)] [changed data value] 
[offset(2 bytes)] [length(2 bytes)] [changed data value] 
   
   

Recovery: 


The following steps apply only when the tuple is optimized. 

6. To form the new tuple, the old tuple is required (even if the old
tuple itself does not require any modification). 
7. Form the new tuple based on the format specified in point 5. 
8. Once the new tuple is formed, follow the existing behavior. 

Frame the new tuple from old tuple and WAL record: 

1. The length of the data that needs to be copied from the old tuple is
calculated as the difference between the offset present in the WAL record
and the current old-tuple offset. 
   (For the first chunk, the old-tuple offset value is zero.) 
2. Once that old-tuple data is copied, increase the old-tuple offset by
the copied length. 
3. Get the length and value of the modified column from the WAL record and
copy it into the new tuple. 
4. Increase the old-tuple offset by the modified column's length. 
5. Repeat this procedure until the end of the WAL record is reached. 
6. Any remaining old-tuple data is then copied. 
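The reconstruction loop above can be sketched in C. This is an illustrative
sketch only: the WalDelta struct and function name are hypothetical, not
PostgreSQL's actual WAL record layout, and it assumes fixed-length columns so
the old and new values of a changed column have the same length, as the
proposal does.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical delta entry, one per changed column in the WAL record:
 * offset of the column in the old tuple, its length, and the new value. */
typedef struct
{
    size_t      offset;         /* offset of the changed column */
    size_t      length;         /* length of the changed value */
    const char *value;          /* new column data from the WAL record */
} WalDelta;

/* Rebuild the new tuple: copy unchanged bytes from the old tuple up to
 * each delta's offset, splice in the changed value, then copy the tail.
 * Returns the length of the new tuple (equal to old_len here, since
 * only fixed-length columns change).  Deltas must be sorted by offset. */
size_t
apply_wal_deltas(const char *old_tup, size_t old_len,
                 const WalDelta *deltas, int ndeltas,
                 char *new_tup)
{
    size_t      old_off = 0;    /* read position in the old tuple */
    size_t      new_off = 0;    /* write position in the new tuple */

    for (int i = 0; i < ndeltas; i++)
    {
        /* steps 1-2: copy old data between the previous change and this one */
        size_t      keep = deltas[i].offset - old_off;

        memcpy(new_tup + new_off, old_tup + old_off, keep);
        new_off += keep;
        old_off += keep;

        /* step 3: copy the modified column value from the WAL record */
        memcpy(new_tup + new_off, deltas[i].value, deltas[i].length);
        new_off += deltas[i].length;

        /* step 4: skip the replaced column in the old tuple */
        old_off += deltas[i].length;
    }

    /* step 6: copy whatever is left of the old tuple */
    memcpy(new_tup + new_off, old_tup + old_off, old_len - old_off);
    return new_off + (old_len - old_off);
}
```

For example, applying one 3-byte change at offset 3 to a 9-byte tuple copies
the 3-byte prefix, splices in the new value, and copies the 3-byte tail.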


Test results: 

--
1. The pgbench test ran for 10 minutes. 

2. pgbench result for tpc-b is attached with this mail as pgbench_org

3. The modified pgbench (such that the size of the modified columns is less
than the whole row) result for tpc-b is attached with this mail as
pgbench_1800_300

Modified pgbench code: 

---
1. The table schemas were modified by adding some extra fields to increase
the record size to 1800 bytes. 
2. The tpc_b benchmark suite was changed to do only update operations. 
3. The update operation was changed to update 3 columns totalling 300 bytes
out of a total row size of 1800 bytes. 
4. The NULL value insertions during table initialization were removed. 


I am working on a solution to handle other scenarios like variable-length
columns, tuples containing NULLs, and handling of before triggers. 

 

Please provide suggestions/objections?

 

With Regards,

Amit Kapila.



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Andrew Dunstan


On 08/02/2012 11:09 PM, Bruce Momjian wrote:


Thirteen months after Andrew posted this WIP, I have restructured and
tested this code, and it is now ready to replace the pgindent shell
script as pgindent.pl, attached.

I have tested this version by re-running the 9.1 and 9.2 pgindent runs
and comparing the output, and it is just like Andrew said --- it is the
same, except for the two improvements he mentioned.

A Perl version of pgindent has several advantages:

*  more portable;  less dependent on utility command variances
*  able to run on Windows, assuming someone makes entab and
pg_bsd_indent Windows binaries
*  able to fix more limitations of pgindent

I will add documentation about the arguments.

Many thanks to Andrew for his fine work on this.  Any objections?



Thanks for not forgetting this.

I think we generally don't put file type extensions on commands, so this 
should probably just be renamed pgindent. If someone wants to go back to 
the old shell script they can still get it from git.


cheers

andrew



Re: [HACKERS] Re: SPGiST versus hot standby - question about conflict resolution rules

2012-08-03 Thread Noah Misch
On Thu, Aug 02, 2012 at 02:49:45PM -0400, Tom Lane wrote:
 I believe that the SPGiST issue should be fixed as follows:
 
 * It's okay for transactions inserting redirection tuples to use their
 own XID as the marker.
 
 * In spgvacuum.c, the test in vacuumRedirectAndPlaceholder to decide if
 a redirection tuple is recyclable should use
   TransactionIdPrecedes(dt->xid, RecentGlobalXmin)
 rather than comparing against OldestXmin as it does now.  (This allows
 some consequent simplification since the module need not pass around
 an OldestXmin parameter.)
 
 * vacuumRedirectAndPlaceholder must also compute the newest XID among
 the redirection tuples it removes from the page, and store that in
 a new field of XLOG_SPGIST_VACUUM_REDIRECT WAL records.
 
 * In spgRedoVacuumRedirect, call ResolveRecoveryConflictWithSnapshot
 with the newest-redirect XID.

All the above looks reasonable, as does the committed patch.

There's an obsolete comment in spg_redo().

 I had first thought that we could avoid a change in WAL contents by
 having spgRedoVacuumRedirect compute the cutoff XID for itself by
 examining the removed tuples, but that doesn't work: XLogInsert might
 for instance have decided to store a full-page image, in which case the
 information isn't available by inspection of the old page contents.
 But we still have to enforce the interlock against hot standby xacts.

We achieve that division of labor for XLOG_BTREE_DELETE by examining the old
contents before RestoreBkpBlocks().  This is safe, I think, because we only
examine the page when the system has running hot standby backends, and we only
allow hot standby connections when recovery has proceeded far enough to fix
all torn and ahead-of-EndRecPtr pages.  Note that this decision is a larger
win for XLOG_BTREE_DELETE than it would be for XLOG_SPGIST_VACUUM_REDIRECT.
For XLOG_BTREE_DELETE, finding the cutoff XID could require fetching and
examining hundreds of heap pages.  For XLOG_SPGIST_VACUUM_REDIRECT, doing so
just adds a few cycles to an existing page scan.  I think the simpler way
you've implemented it is better for the long term.

 In principle we should bump the XLOG page magic number for this change
 in WAL contents, but I'm inclined not to because it'd cause pain for
 beta testers, and there are probably very few people who'd have live
 XLOG_SPGIST_VACUUM_REDIRECT records in play when they switch to beta3.

Seems fair.

Thanks,
nm



Re: [HACKERS] WIP Patch: Use sortedness of CSV foreign tables for query planning

2012-08-03 Thread Robert Haas
On Thu, Aug 2, 2012 at 7:01 AM, Etsuro Fujita
<fujita.ets...@lab.ntt.co.jp> wrote:
 The following is a comment at fileGetForeignPaths() in contrib/file_fdw.c:

 /*
  * If data file was sorted, and we knew it somehow, we could insert
  * appropriate pathkeys into the ForeignPath node to tell the planner
  * that.
  */

 To do this, I would like to propose new generic options for a file_fdw foreign
 table to specify the sortedness of a data file.  While it would be best to
 allow specifying the sortedness on multiple columns, the current interface for
 the generic options does not seem suitable for doing that.  As a compromise, I
 would like to propose single-column sortedness options and insert appropriate
 pathkeys into the ForeignPath node based on this information:

I am not sure it is a good idea to complicate file_fdw with frammishes
of marginal utility.  I guess I tend to view things like file_fdw as a
mechanism for getting the data into the database, not necessarily
something that you actually want to keep your data in permanently and
run complex queries against.  It seems like that's the direction we're
headed in here - statistics, indexing, etc.  I am all in favor of
having some kind of pluggable storage engine as an alternative to our
heap, but I'm not sure a flat-file is a good choice.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] [PATCH] Docs: Make notes on sequences and rollback more obvious

2012-08-03 Thread Craig Ringer

Hi all

I'm seeing enough questions on pgsql-general and Stack Overflow to 
suggest that the docs on how sequences interact with transaction 
rollback need improvement. Take the most recent post on -general, where the 
poster read at least the tutorial, but had no idea about the exemption.


The attached patch:

- Moves the note about nextval() from the footer to be inside the 
nextval description


- Adds an xref from the advanced-transactions tutorial where the poster 
noted their point of confusion, noting the exemption and pointing to the 
docs on nextval.


- Adds a pointer from the docs on SERIAL types to the nextval notes on tx 
rollback.


Comments would be appreciated.

--
Craig Ringer
diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml
index 218988e..423f09e 100644
--- a/doc/src/sgml/advanced.sgml
+++ b/doc/src/sgml/advanced.sgml
@@ -237,6 +237,16 @@ COMMIT;
 commandCOMMIT/, and all our updates so far will be canceled.
/para
 
+   note
+ para
+  A few things in the database are exempt from rollback.  The most
+  important are literalSEQUENCE/s - which are used as the counters in
+  literalSERIAL/ columns. See xref linkend=functions-sequence.  Any
+  function or type with special transactional behavior will have an explanatory
+  note in its documentation.
+ /para
+   /note
+
para
 productnamePostgreSQL/ actually treats every SQL statement as being
 executed within a transaction.  If you do not issue a commandBEGIN/
@@ -251,8 +261,8 @@ COMMIT;
 para
  Some client libraries issue commandBEGIN/ and commandCOMMIT/
  commands automatically, so that you might get the effect of transaction
- blocks without asking.  Check the documentation for the interface
- you are using.
+ blocks without asking. Client libraries often call this autocommit.
+ Check the documentation for the interface you are using.
 /para
/note
 
diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index afc82a2..cbde801 100644
--- a/doc/src/sgml/datatype.sgml
+++ b/doc/src/sgml/datatype.sgml
@@ -800,7 +800,19 @@ NUMERIC
  typebigserial/type are not true types, but merely
  a notational convenience for creating unique identifier columns
  (similar to the literalAUTO_INCREMENT/literal property
- supported by some other databases). In the current
+ supported by some other databases).
+/para
+
+important
+  para
+Because they use literalSEQUENCE/literals, serial data types are
+	exempt from transactional rollback. This means they can have holes
+or gaps where values are discarded. See literalnextval()/literal in
+	xref linkend=functions-sequence for details.
+  /para
+/important
+
+paraIn the current
  implementation, specifying:
 
 programlisting
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 157de09..0296d3a 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -9820,6 +9820,27 @@ nextval('foo'::text)  lineannotationliteralfoo/literal is looked up at
 execute functionnextval/function concurrently, each will safely receive
 a distinct sequence value.
/para
+
+   important
+para
+ To avoid blocking concurrent transactions that obtain numbers from the
+ same sequence, a functionnextval/function operation is never rolled back;
+ that is, once a value has been fetched it is considered used, even if the
+ transaction that did the functionnextval/function later aborts.  This means
+ that aborted transactions might leave unused quoteholes/quote in the
+ sequence of assigned values.  functionsetval/function operations are never
+ rolled back, either.
+/para
+   /important
+
+   para
+If a sequence object has been created with default parameters,
+successive functionnextval/function calls will return successive values
+beginning with 1.  Other behaviors can be obtained by using
+special parameters in the xref linkend=sql-createsequence command;
+see its command reference page for more information.
+   /para
+
   /listitem
  /varlistentry
 
@@ -9883,31 +9904,17 @@ SELECT setval('foo', 42, false);lineannotationNext functionnextval/ wi
 The result returned by functionsetval/function is just the value of its
 second argument.
/para
+   important
+para
+ Changes to sequences made by setval() are not undone if the transaction
+ rolls back. See the note on nextval().
+/para
+   /important
   /listitem
  /varlistentry
 /variablelist
   /para
 
-  para
-   If a sequence object has been created with default parameters,
-   successive functionnextval/function calls will return successive values
-   beginning with 1.  Other behaviors can be obtained by using
-   special parameters in the xref linkend=sql-createsequence command;
-  

Re: [HACKERS] Word-smithing doc changes

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 12:26:56AM -0400, Bruce Momjian wrote:
 The concurrent index documentation under discussion above was never
 updated, so I took a stab at it, attached.
 
 Greg, I looked at adding a mention of the virtual transaction wait to
 the explicit-locking section as you suggested, and found those were
 all user-visible locking, while this is internal locking.  I did find a
 clear description of transaction id locking in the pg_locks system view
 docs, so I just referenced that.

I found a way to clarify the wording further;  patch attached.

-- 
  Bruce Momjian  <br...@momjian.us>  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml
new file mode 100644
index 2ab6470..17b433a
*** a/doc/src/sgml/ref/create_index.sgml
--- b/doc/src/sgml/ref/create_index.sgml
*** CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ]
*** 394,408 
 /para
  
 para
! In a concurrent index build, the index is actually entered into the
! system catalogs in one transaction, then the two table scans occur in a
! second and third transaction.  All active transactions at the time the
! second table scan starts, not just ones that already involve the table,
! have the potential to block the concurrent index creation until they
! finish.  When checking for transactions that could still use the original
! index, concurrent index creation advances through potentially interfering
! older transactions one at a time, obtaining shared locks on their virtual
! transaction identifiers to wait for them to complete.
 /para
  
 para
--- 394,407 
 /para
  
 para
! In a concurrent index build, the index is actually entered into
! the system catalogs in one transaction, then two table scans occur in
! two more transactions.  Any transaction active when the second table
! scan starts can block concurrent index creation until it completes,
! even transactions that only reference the table after the second table
! scan starts.   Concurrent index creation serially waits for each old
! transaction to complete using the method outlined in section xref
! linkend=view-pg-locks.
 /para
  
 para



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 08:26:50AM -0400, Andrew Dunstan wrote:
 
 On 08/02/2012 11:09 PM, Bruce Momjian wrote:
 
 Thirteen months after Andrew posted this WIP, I have restructured and
 tested this code, and it is now ready to replace the pgindent shell
 script as pgindent.pl, attached.
 
 I have tested this version by re-running the 9.1 and 9.2 pgindent runs
 and comparing the output, and it is just like Andrew said --- it is the
 same, except for the two improvements he mentioned.
 
 A Perl version of pgindent has several advantages:
 
 *  more portable;  less dependent on utility command variances
 *  able to run on Windows, assuming someone makes entab and
 pg_bsd_indent Windows binaries
 *  able to fix more limitations of pgindent
 
 I will add documentation about the arguments.
 
 Many thanks to Andrew for his fine work on this.  Any objections?
 
 
 Thanks for not forgetting this.
 
 I think we generally don't put file type extensions on commands, so
 this should probably just be renamed pgindent. If someone wants to
 go back to the old shell script they can still get it from git.

Of course.  I was just noticing that most of the Perl scripts in
/src/tools and src/tools/msvc have a .pl extension on the file name, so
I was following that style.  Is that valid?

-- 
  Bruce Momjian  <br...@momjian.us>  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pg_dump custom format without timestamp?

2012-08-03 Thread Tom Lane
Fabien COELHO <coe...@cri.ensmp.fr> writes:
 I was looking into using hardlinks to reduce the storage of keeping and 
 syncing periodic database dumps when they are identical. This works fine 
 with the textual format, but not for the custom format because the file 
 header includes a timestamp set by function WriteHead in file 
 src/bin/pg_dump/pg_backup_archiver.c.

I'm not sure about this goal ...

 In order to circumvent this issue, I would think of adding a 
 --no-timestamp option to pg_dump and put zeros everywhere in place of 
 the actual timestamp in such case, and possibly ignoring the said 
 timestamp in function ReadHead.

... and quite dislike this solution.  pg_dump has way too many bizarre
options already.  Perhaps you should consider making a bit of code that
knows how to compare two custom dumps ignoring the timestamp.
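A minimal sketch of such a comparison, for illustration only (this is not
pg_dump code: where the timestamp actually sits in the header depends on the
archive version, so the skip window is a parameter the caller would have to
determine from WriteHead's layout):

```c
#include <stddef.h>

/* Compare two dump images byte-for-byte, treating the byte range
 * [skip_start, skip_start + skip_len) -- e.g. the header timestamp --
 * as "don't care".  Returns 1 when the dumps match outside that window. */
static int
dumps_equal_ignoring(const unsigned char *a, size_t alen,
                     const unsigned char *b, size_t blen,
                     size_t skip_start, size_t skip_len)
{
    if (alen != blen)
        return 0;               /* different sizes can never match */

    for (size_t i = 0; i < alen; i++)
    {
        if (i >= skip_start && i < skip_start + skip_len)
            continue;           /* inside the ignored timestamp window */
        if (a[i] != b[i])
            return 0;
    }
    return 1;
}
```

A wrapper reading the two files with mmap() or fread() and then hardlinking
when this returns 1 would achieve the deduplication without touching pg_dump.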

regards, tom lane



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Tom Lane
Bruce Momjian <br...@momjian.us> writes:
 On Fri, Aug  3, 2012 at 08:26:50AM -0400, Andrew Dunstan wrote:
 I think we generally don't put file type extensions on commands, so
 this should probably just be renamed pgindent. If someone wants to
 go back to the old shell script they can still get it from git.

 Of course.  I was just noticing that most of the Perl scripts in
 /src/tools and src/tools/msvc have a .pl extension on the file name, so
 I was following that style.  Is that valid?

Well, you're replacing the old script, so it has to keep the same name.

IMO adding such an extension to an executable script isn't a terribly
good practice, because it turns what ought to be an implementation
detail into part of the script's API.  Had the shell script been named
pgindent.sh to begin with, we'd now be stuck with the unpalatable
alternatives of changing the name or using an extension that lies
about the implementation language.  I don't much care for putting
in an assumption that the Perl implementation will never be replaced,
either.

regards, tom lane



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Robert Haas
On Fri, Aug 3, 2012 at 10:33 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Bruce Momjian <br...@momjian.us> writes:
 On Fri, Aug  3, 2012 at 08:26:50AM -0400, Andrew Dunstan wrote:
 I think we generally don't put file type extensions on commands, so
 this should probably just be renamed pgindent. If someone wants to
 go back to the old shell script they can still get it from git.

 Of course.  I was just noticing that most of the Perl scripts in
 /src/tools and src/tools/msvc have a .pl extension on the file name, so
 I was following that style.  Is that valid?

 Well, you're replacing the old script, so it has to keep the same name.

 IMO adding such an extension to an executable script isn't a terribly
 good practice, because it turns what ought to be an implementation
 detail into part of the script's API.  Had the shell script been named
 pgindent.sh to begin with, we'd now be stuck with the unpalatable
 alternatives of changing the name or using an extension that lies
 about the implementation language.  I don't much care for putting
 in an assumption that the Perl implementation will never be replaced,
 either.

+1.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 10:33:21AM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Fri, Aug  3, 2012 at 08:26:50AM -0400, Andrew Dunstan wrote:
  I think we generally don't put file type extensions on commands, so
  this should probably just be renamed pgindent. If someone wants to
  go back to the old shell script they can still get it from git.
 
  Of course.  I was just noticing that most of the Perl scripts in
  /src/tools and src/tools/msvc have a .pl extension on the file name, so
  I was following that style.  Is that valid?
 
 Well, you're replacing the old script, so it has to keep the same name.
 
 IMO adding such an extension to an executable script isn't a terribly
 good practice, because it turns what ought to be an implementation
 detail into part of the script's API.  Had the shell script been named
 pgindent.sh to begin with, we'd now be stuck with the unpalatable
 alternatives of changing the name or using an extension that lies
 about the implementation language.  I don't much care for putting
 in an assumption that the Perl implementation will never be replaced,
 either.

OK, sure, we can keep the pgindent name --- I was just trying to be
consistent.  One problem with the lack of an extension is that there is
no easy way for the Perl cleanup instructions to find all the Perl
executables --- right now it looks for an extension.  Do we have other
Perl scripts in our tree that don't end in *.pl or *.pm?  I didn't find
any with this script:

$ find . -type f -exec file {} \;|grep Perl
./src/backend/catalog/Catalog.pm: Perl5 module source text
./src/tools/msvc/MSBuildProject.pm: Perl5 module source text
./src/tools/msvc/Project.pm: Perl5 module source text
./src/tools/msvc/Mkvcbuild.pm: Perl5 module source text
./src/tools/msvc/Install.pm: Perl5 module source text
./src/tools/msvc/Solution.pm: Perl5 module source text
./src/tools/msvc/VCBuildProject.pm: Perl5 module source text
./src/tools/msvc/VSObjectFactory.pm: Perl5 module source text

We can hard-code pgindent as one we should perltidy.

FYI, personally, I have never been a big fan of using *.pl for things I
normally run, like scripts in /usr/local/bin, but I sometimes use *.pl
for utility stuff.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] CreateLockFile() race condition

2012-08-03 Thread Noah Misch
Despite its efforts, CreateLockFile() does not reliably prevent multiple
closely-timed postmasters from completing startup on the same data directory.
This test procedure evades its defenses:

1. kill -9 the running postmaster for $PGDATA
2. mkdir /tmp/sock1 /tmp/sock2
3. gdb postgres, b unlink, run -k /tmp/sock1.  Hits breakpoint.
4. postgres -k /tmp/sock2
5. In gdb: d 1, c.

The use of two socket directories is probably not essential, but it makes the
test case simpler.

The problem here is a race between concluding the assessment of a PID file as
defunct and unlinking it; during that period, another postmaster may have
replaced the PID file and proceeded.  As far as I've been able to figure, this
flaw is fundamental to any PID file invalidation algorithm relying solely on
atomic filesystem operations like unlink(2), link(2), rename(2) and small
write(2) for mutual exclusion.  Do any of you see a way to remove the race?

I think we should instead implement postmaster mutual exclusion by way of
fcntl(F_SETLK) on Unix and CreateFile(..., FILE_SHARE_READ, ...) on Windows.
PostgreSQL used fcntl(F_SETLK) on its Unix socket a decade ago, and that usage
was removed[1] due to portability problems[2][3].  POSIX does not require it
to work on other than regular files, but I can't find evidence of a Unix-like
platform not supporting it for regular files.  Perl's metaconfig does test it,
with a note about VMS and DOS/DJGPP lacking support.  Gnulib reports only the
lack of Windows support, though the Gnulib test suite does not cover F_SETLK.

CreateLockFile() would have this algorithm:

1. open("postmaster.pid", O_RDWR | O_CREAT, 0600)
2. Request a fcntl write lock on the entire postmaster.pid file.  If this
fails, abandon startup: another postmaster is running.
3. Read the PID from the file.  If the read returns 0 bytes, either we created
the file or the postmaster creating it crashed before writing and syncing any
content.  Such a postmaster would not have reached the point of allocating
shared memory or forking children, so we're safe to proceed in either case.
Any other content problems should never happen and can remain fatal errors.
4. Check any old PID as we do today.  In theory, this should always be
redundant with the fcntl lock.  However, since the PID check code is mature,
let's keep it around indefinitely as a backstop.  Perhaps adjust the error
message to better reflect its "can't happen" nature, though.
5. Read the shared memory key from the file.  If we find one, probe it as we
do today.  Otherwise, the PID file's creating postmaster crashed before
writing and syncing that value.  Such a postmaster would not have reached the
point of forking children, so we're safe to proceed.
6. ftruncate() the file and write our own content.
7. Set FD_CLOEXEC on the file, leaving it open to retain the lock.

AddToDataDirLockFile() would use the retained file descriptor.  On a related
but independent note, I think the ereport(LOG) calls in that function should
be ereport(FATAL) as they are in CreateLockFile().  We're not significantly
further into the startup process, and failing to add the shared memory key to
the file creates a hazardous situation.

The hazard[4] keeping fcntl locking from replacing the PGSharedMemoryIsInUse()
check does not apply here, because the postmaster itself does not run
arbitrary code that might reopen postmaster.pid.

This change adds some interlock to data directories shared over NFS.  It also
closes the occasionally-reported[5][6] problem that an empty postmaster.pid
blocks cluster startup; we no longer face the possibility that an empty file
is the just-created product of a concurrent postmaster.  It's probably no
longer necessary to fsync postmaster.pid, though I'm disinclined to change
that straightway.

Do any of you see holes in that design or have better ideas?

Thanks,
nm

[1] 
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=792b0f4666b6ea6346aa8d29b568e5d3fe1fcef5
[2] http://archives.postgresql.org/pgsql-hackers/2000-07/msg00359.php
[3] http://archives.postgresql.org/pgsql-hackers/2000-11/msg01306.php
[4] http://archives.postgresql.org/pgsql-hackers/2012-06/msg01598.php
[5] http://archives.postgresql.org/pgsql-hackers/2010-08/msg01094.php
[6] http://archives.postgresql.org/pgsql-hackers/2012-01/msg00233.php



Re: [HACKERS] CreateLockFile() race condition

2012-08-03 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 The problem here is a race between concluding the assessment of a PID file as
 defunct and unlinking it; during that period, another postmaster may have
 replaced the PID file and proceeded.  As far as I've been able to figure, this
 flaw is fundamental to any PID file invalidation algorithm relying solely on
 atomic filesystem operations like unlink(2), link(2), rename(2) and small
 write(2) for mutual exclusion.  Do any of you see a way to remove the race?

Nasty.  Still, the issue only exists for two postmasters launched at
just about exactly the same time, which is an unlikely case.

 I think we should instead implement postmaster mutual exclusion by way of
 fcntl(F_SETLK) on Unix and CreateFile(..., FILE_SHARE_READ, ...) on Windows.

I'm a bit worried about what new problems this solution is going to open
up.  It seems not unlikely that the cure is worse than the disease.
Having locking that actually works on (some) NFS setups would be nice,
but ...

 The hazard[4] keeping fcntl locking from replacing the PGSharedMemoryIsInUse()
 check does not apply here, because the postmaster itself does not run
 arbitrary code that might reopen postmaster.pid.

False.  See shared_preload_libraries.

regards, tom lane



Re: [HACKERS] CreateLockFile() race condition

2012-08-03 Thread Robert Haas
On Fri, Aug 3, 2012 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 I think we should instead implement postmaster mutual exclusion by way of
 fcntl(F_SETLK) on Unix and CreateFile(..., FILE_SHARE_READ, ...) on Windows.

 I'm a bit worried about what new problems this solution is going to open
 up.  It seems not unlikely that the cure is worse than the disease.
 Having locking that actually works on (some) NFS setups would be nice,
 but ...

 The hazard[4] keeping fcntl locking from replacing the 
 PGSharedMemoryIsInUse()
 check does not apply here, because the postmaster itself does not run
 arbitrary code that might reopen postmaster.pid.

 False.  See shared_preload_libraries.

It strikes me that it would be sufficient to hold the fcntl() lock
just long enough to establish one of the other interlocks.  We don't
really need to hold it for the entire lifetime of the postmaster.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] -Wformat-zero-length

2012-08-03 Thread Bruce Momjian
On Thu, Jul  7, 2011 at 06:09:54PM -0400, Tom Lane wrote:
 I wrote:
  Peter Eisentraut pete...@gmx.net writes:
  I was adding gcc printf attributes to more functions in obscure places,
  and now I'm seeing this in pg_upgrade:
 
  relfilenode.c:72:2: warning: zero-length gnu_printf format string 
  [-Wformat-zero-length]
 
  Shouldn't it be prep_status("\n")?  If not, why not?
 
 On closer inspection, it appears to me that prep_status should never be
 called with a string containing a newline, period, and the test it
 contains for that case is just brain damage.  The only reason to call it
 at all is to produce a line like
 
   message ..
 
 where something more is expected to be added to the line later.  Calls
 that are meant to produce a complete line could go directly to pg_log.
 
 This in turn implies that transfer_all_new_dbs's use of the function is
 broken and needs to be rethought.

I changed a prep_status() call to pg_log() as you suggested, and
backpatched to 9.2.  Patch attached.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/pg_upgrade/relfilenode.c b/contrib/pg_upgrade/relfilenode.c
new file mode 100644
index 7688914..33a867f
*** a/contrib/pg_upgrade/relfilenode.c
--- b/contrib/pg_upgrade/relfilenode.c
*** transfer_all_new_dbs(DbInfoArr *old_db_a
*** 36,42 
  new_dbnum;
  	const char *msg = NULL;
  
! 	prep_status("%s user relation files\n",
  	  user_opts.transfer_mode == TRANSFER_MODE_LINK ? Linking : Copying);
  
  	/* Scan the old cluster databases and transfer their files */
--- 36,42 
  new_dbnum;
  	const char *msg = NULL;
  
! 	pg_log(PG_REPORT, "%s user relation files\n",
  	  user_opts.transfer_mode == TRANSFER_MODE_LINK ? Linking : Copying);
  
  	/* Scan the old cluster databases and transfer their files */
diff --git a/contrib/pg_upgrade/util.c b/contrib/pg_upgrade/util.c
new file mode 100644
index 76cd20b..1c71204
*** a/contrib/pg_upgrade/util.c
--- b/contrib/pg_upgrade/util.c
*** pg_log(eLogType type, char *fmt,...)
*** 81,87 
  	if (type != PG_VERBOSE || log_opts.verbose)
  	{
  		fwrite(message, strlen(message), 1, log_opts.internal);
! 		/* if we are using OVERWRITE_MESSAGE, add newline */
  		if (strchr(message, '\r') != NULL)
  		fwrite("\n", 1, 1, log_opts.internal);
  		fflush(log_opts.internal);
--- 81,87 
  	if (type != PG_VERBOSE || log_opts.verbose)
  	{
  		fwrite(message, strlen(message), 1, log_opts.internal);
! 		/* if we are using OVERWRITE_MESSAGE, add newline to log file */
  		if (strchr(message, '\r') != NULL)
  		fwrite("\n", 1, 1, log_opts.internal);
  		fflush(log_opts.internal);



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Andrew Dunstan


On 08/03/2012 10:51 AM, Bruce Momjian wrote:



OK, sure, we can keep the pgindent name --- I was just trying to be
consistent.  One problem with the lack of an extension is that there is
no easy way for the Perl cleanup instructions to find all the Perl
executables --- right now it looks for an extension.  Do we have other
Perl scripts in our tree that don't end in *.pl or *.pm?  I didn't find
any with this script:

$ find . -type f -exec file {} \;|grep Perl
./src/backend/catalog/Catalog.pm: Perl5 module source text
./src/tools/msvc/MSBuildProject.pm: Perl5 module source text
./src/tools/msvc/Project.pm: Perl5 module source text
./src/tools/msvc/Mkvcbuild.pm: Perl5 module source text
./src/tools/msvc/Install.pm: Perl5 module source text
./src/tools/msvc/Solution.pm: Perl5 module source text
./src/tools/msvc/VCBuildProject.pm: Perl5 module source text
./src/tools/msvc/VSObjectFactory.pm: Perl5 module source text


Try:

   find . -exec file {} \; | egrep 'perl script|perl -w script|Perl5 module'


cheers

andrew



Re: [HACKERS] -Wformat-zero-length

2012-08-03 Thread Alvaro Herrera
Excerpts from Bruce Momjian's message of vie ago 03 12:44:35 -0400 2012:

 I changed a prep_status() call to pg_log() as you suggested, and
 backpatched to 9.2.  Patch attached.

I dunno about this particular patch, but if we ever want to move
pg_upgrade out of contrib we will have to stare hard at it to improve
translatability of the messages it emits.

How ready are we to move it to src/bin/?  Is it sensible to do so in
9.3?

-- 
Álvaro Herrera alvhe...@alvh.no-ip.org



Re: [HACKERS] Word-smithing doc changes

2012-08-03 Thread Alvaro Herrera
Excerpts from Bruce Momjian's message of vie ago 03 09:59:36 -0400 2012:
 On Fri, Aug  3, 2012 at 12:26:56AM -0400, Bruce Momjian wrote:
  The concurrent index documentation under discussion above was never
  updated, so I took a stab at it, attached.
  
  Greg, I looked at adding a mention of the virtual transaction wait to
  the explicit-locking section as you suggested, and found those were
  all user-visible locking, while this is internal locking.  I did find a
  clear description of transaction id locking in the pg_locks system view
  docs, so I just referenced that.
 
 I found a way to clarify the wording further;  patch attached.

Looks sane to me.

Are we backpatching this to 9.1?  I no longer remember if the original
wording is there or just in 9.2.

-- 
Álvaro Herrera alvhe...@alvh.no-ip.org



Re: [HACKERS] Re: SPGiST versus hot standby - question about conflict resolution rules

2012-08-03 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 On Thu, Aug 02, 2012 at 02:49:45PM -0400, Tom Lane wrote:
 * In spgRedoVacuumRedirect, call ResolveRecoveryConflictWithSnapshot
 with the newest-redirect XID.

 There's an obsolete comment in spg_redo().

[ squint ... ]  Comparing that to btree_redo, I realize that there's a
bug in what I did yesterday: the ResolveRecoveryConflictWithSnapshot
call has to happen before we call RestoreBkpBlocks, else the new state
of the index page could be exposed to processes that need the old one.

Will fix.  I think the code in btree_redo could use a better (or any)
comment about this point, too.

 But we still have to enforce the interlock against hot standby xacts.

 We achieve that division of labor for XLOG_BTREE_DELETE by examining the old
 contents before RestoreBkpBlocks().  This is safe, I think, because we only
 examine the page when the system has running hot standby backends, and we only
 allow hot standby connections when recovery has proceeded far enough to fix
 all torn and ahead-of-EndRecPtr pages.

Egad.  That's seriously underdocumented as well.  And I think it needs
an explicit test that the page is actually older than the current WAL
record, because it would otherwise be doing the wrong thing.

regards, tom lane



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 12:51:03PM -0400, Andrew Dunstan wrote:
 
 On 08/03/2012 10:51 AM, Bruce Momjian wrote:
 
 
 OK, sure, we can keep the pgindent name --- I was just trying to be
 consistent.  One problem with the lack of an extension is that there is
 no easy way for the Perl cleanup instructions to find all the Perl
 executables --- right now it looks for an extension.  Do we have other
 Perl scripts in our tree that don't end in *.pl or *.pm?  I didn't find
 any with this script:
 
  $ find . -type f -exec file {} \;|grep Perl
  ./src/backend/catalog/Catalog.pm: Perl5 module source text
  ./src/tools/msvc/MSBuildProject.pm: Perl5 module source text
  ./src/tools/msvc/Project.pm: Perl5 module source text
  ./src/tools/msvc/Mkvcbuild.pm: Perl5 module source text
  ./src/tools/msvc/Install.pm: Perl5 module source text
  ./src/tools/msvc/Solution.pm: Perl5 module source text
  ./src/tools/msvc/VCBuildProject.pm: Perl5 module source text
  ./src/tools/msvc/VSObjectFactory.pm: Perl5 module source text
 
 Try:
 
 find . -exec file {} \; | egrep 'perl script|perl -w script|Perl5 module'

OK, I used:

$ find . -type f -exec file {} \;|
egrep -i 'perl.*(script|module)'|grep -v '\.p[lm]:'

and got:

./src/pl/plperl/ppport.h: awk script text
./src/tools/pginclude/pgcheckdefines: a /usr/bin/perl -w script text executable
./src/tools/git_changelog: a /usr/bin/perl script text executable

The last two are Perl scripts without Perl file extensions, so let's
just go with 'pgindent' and I will hard-code those into the perltidy
instructions.


-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Word-smithing doc changes

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 12:55:30PM -0400, Álvaro Herrera wrote:
 Excerpts from Bruce Momjian's message of vie ago 03 09:59:36 -0400 2012:
  On Fri, Aug  3, 2012 at 12:26:56AM -0400, Bruce Momjian wrote:
   The concurrent index documentation under discussion above was never
   updated, so I took a stab at it, attached.
   
   Greg, I looked at adding a mention of the virtual transaction wait to
   the explicit-locking section as you suggested, and found those were
   all user-visible locking, while this is internal locking.  I did find a
   clear description of transaction id locking in the pg_locks system view
   docs, so I just referenced that.
  
  I found a way to clarify the wording further;  patch attached.
 
 Looks sane to me.
 
 Are we backpatching this to 9.1?  I no longer remember if the original
 wording is there or just in 9.2.

I wasn't planning to, but will do as you suggest for 9.1.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] -Wformat-zero-length

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 12:53:22PM -0400, Álvaro Herrera wrote:
 Excerpts from Bruce Momjian's message of vie ago 03 12:44:35 -0400 2012:
 
  I changed a prep_status() call to pg_log() as you suggested, and
  backpatched to 9.2.  Patch attached.
 
 I dunno about this particular patch, but if we ever want to move
 pg_upgrade out of contrib we will have to stare hard at it to improve
 translatability of the messages it emits.

Agreed.

 How ready are we to move it to src/bin/?  Is it sensible to do so in
 9.3?

We have talked about that.  Several people felt the instructions for
using pg_upgrade were already too complex and that it isn't a
general-user utility like pg_dump.  Not sure if that is still the
general feeling.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] CreateLockFile() race condition

2012-08-03 Thread Noah Misch
On Fri, Aug 03, 2012 at 11:59:00AM -0400, Tom Lane wrote:
 Noah Misch n...@leadboat.com writes:
  I think we should instead implement postmaster mutual exclusion by way of
  fcntl(F_SETLK) on Unix and CreateFile(..., FILE_SHARE_READ, ...) on Windows.
 
 I'm a bit worried about what new problems this solution is going to open
 up.  It seems not unlikely that the cure is worse than the disease.

That's a fair concern.  There's only so much we'll know in advance.

 Having locking that actually works on (some) NFS setups would be nice,
 but ...
 
  The hazard[4] keeping fcntl locking from replacing the 
  PGSharedMemoryIsInUse()
  check does not apply here, because the postmaster itself does not run
  arbitrary code that might reopen postmaster.pid.
 
 False.  See shared_preload_libraries.

Quite right.  Even so, that code has a special role and narrower goals to
which it can reasonably aspire, giving credibility to ignoring the problem or
documenting the problem away.  (I don't see that we document any of the other
constraints on _PG_init of libraries named in shared_preload_libraries.)

Thanks,
nm



Re: [HACKERS] Re: SPGiST versus hot standby - question about conflict resolution rules

2012-08-03 Thread Tom Lane
I wrote:
 Noah Misch n...@leadboat.com writes:
 We achieve that division of labor for XLOG_BTREE_DELETE by examining the old
 contents before RestoreBkpBlocks().  This is safe, I think, because we only
 examine the page when the system has running hot standby backends, and we only
 allow hot standby connections when recovery has proceeded far enough to fix
 all torn and ahead-of-EndRecPtr pages.

 Egad.  That's seriously underdocumented as well.

And buggy, now that I look at it.  Suppose a new backend starts
immediately after CountDBBackends runs?  That's okay, because it can't
possibly have seen any page links that we would need to conflict for
... but because the code is lazy and returns InvalidTransactionId,
we'll throw a conflict on it anyway.

The case where the loop doesn't find any live tuples seems problematic
as well.  There's a comment asserting that this will happen very
rarely, which is neither comforting nor backed up by any evidence or
argument.  ISTM that a page that VACUUM has removed tuples from might
very well have a preimage consisting only of dead tuples.

regards, tom lane



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Andrew Dunstan


On 08/03/2012 01:23 PM, Bruce Momjian wrote:

On Fri, Aug  3, 2012 at 12:51:03PM -0400, Andrew Dunstan wrote:

On 08/03/2012 10:51 AM, Bruce Momjian wrote:



OK, sure, we can keep the pgindent name --- I was just trying to be
consistent.  One problem with the lack of an extension is that there is
no easy way for the Perl cleanup instructions to find all the Perl
executables --- right now it looks for an extension.  Do we have other
Perl scripts in our tree that don't end in *.pl or *.pm?  I didn't find
any with this script:

$ find . -type f -exec file {} \;|grep Perl
./src/backend/catalog/Catalog.pm: Perl5 module source text
./src/tools/msvc/MSBuildProject.pm: Perl5 module source text
./src/tools/msvc/Project.pm: Perl5 module source text
./src/tools/msvc/Mkvcbuild.pm: Perl5 module source text
./src/tools/msvc/Install.pm: Perl5 module source text
./src/tools/msvc/Solution.pm: Perl5 module source text
./src/tools/msvc/VCBuildProject.pm: Perl5 module source text
./src/tools/msvc/VSObjectFactory.pm: Perl5 module source text

Try:

find . -exec file {} \; | egrep 'perl script|perl -w script|Perl5 module'

OK, I used:

$ find . -type f -exec file {} \;|
egrep -i 'perl.*(script|module)'|grep -v '\.p[lm]:'

and got:

./src/pl/plperl/ppport.h: awk script text
./src/tools/pginclude/pgcheckdefines: a /usr/bin/perl -w script text executable
./src/tools/git_changelog: a /usr/bin/perl script text executable

The last two are Perl scripts without Perl file extensions, so let's
just go with 'pgindent' and I will hard-code those into the perltidy
instructions.





Your pattern has produced a false positive, though. Wouldn't it be 
better not to hardcode anything?


cheers

andrew



Re: [HACKERS] pg_dump custom format without timestamp?

2012-08-03 Thread Fabien COELHO


Dear Tom,

thanks for your answer,


Fabien COELHO coe...@cri.ensmp.fr writes:

I was looking into using hardlinks to reduce the storage of keeping and
syncing periodic database dumps when they are identical. This works fine
with the textual format, but not for the custom format because the file
header includes a timestamp set by function WriteHead in file
src/bin/pg_dump/pg_backup_archiver.c.


I'm not sure about this goal ...


That may be debatable. I just want easy long-term dumps with rotations on 
small databases, and I can do that in a few lines of shell using links, 
something like:


 # on every hour
 pg_dump some options base > $current
 # is it identical to the previous one?
 cmp $current $previous && current=$previous
 ln $current $(date +H%H) # H00 .. H23 / hourly, daily rotation
 ln $current $(date +%a)  # Mon .. Sun / daily, weekly rotation
 ln $current $(date +W%V) # W01 .. W53 / weekly, yearly rotation
 ln $current $(date +%b)  # Jan .. Dec / monthly, yearly rotation
 ln $current $(date +Y%Y) # Y2012 .. Y20XX / yearly, no rotation
 mv $current $previous


In order to circumvent this issue, I would think of adding a
--no-timestamp option to pg_dump and put zeros everywhere in place of
the actual timestamp in such case, and possibly ignoring the said
timestamp in function ReadHead.


... and quite dislike this solution.


I agree that it is a little bit ugly. I'm not sure that it was a good idea 
to add a timestamp to the dump format. From a system perspective, the file is 
already timestamped when created, so this is somewhat redundant. Well, one 
may lose the timestamps.


pg_dump has way too many bizarre options already.  Perhaps you should 
consider making a bit of code that knows how to compare two custom dumps 
ignoring the timestamp.


I could do that, but I like a simple cmp in a simple shell script, 
rather than a custom comparison command. The fallback would be to do a cmp 
-i XX to blindly skip part of the header.


--
Fabien.



Re: [HACKERS] WIP pgindent replacement

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 02:08:34PM -0400, Andrew Dunstan wrote:
 $ find . -type f -exec file {} \;|
  egrep -i 'perl.*(script|module)'|grep -v '\.p[lm]:'
 
 and got:
 
 ./src/pl/plperl/ppport.h: awk script text
 ./src/tools/pginclude/pgcheckdefines: a /usr/bin/perl -w script text executable
 ./src/tools/git_changelog: a /usr/bin/perl script text executable
 
 The last two are Perl scripts without Perl file extensions, so let's
 just go with 'pgindent' and I will hard-code those into the perltidy
 instructions.
 
 
 
 
 Your pattern has produced a false positive, though. Wouldn't it be
 better not to hardcode anything?

You mean use an actual 'grep' to find the Perl programs --- I can do
that.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Re: [COMMITTERS] pgsql: Add temp_file_limit GUC parameter to constrain temporary file sp

2012-08-03 Thread Bruce Momjian
On Tue, Jul 19, 2011 at 10:29:45AM +1200, Mark Kirkwood wrote:
 On 19/07/11 02:36, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
 Tom Lane wrote:
 Huh?  Seems like a waste of text to me.  What else would happen?
 Well, if we exceed work_mem, we spill to temp disk.  If we exceed temp
 disk, we error out.  Those different behaviors don't seem obvious to me.
 [ shrug... ]  Feel free to change it.
 
 
 
 No objections from me - can't see any harm in making it very clear
 what happens when the limit is exceeded :-)

Documentation patch attached and applied, and backpatched to 9.2.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
new file mode 100644
index 074afee..216c095
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
*** SET ENABLE_SEQSCAN TO OFF;
*** 1138,1144 
 para
  Specifies the maximum amount of disk space that a session can use
  for temporary files, such as sort and hash temporary files, or the
! storage file for a held cursor.
  The value is specified in kilobytes, and <literal>-1</> (the
  default) means no limit.
  Only superusers can change this setting.
--- 1138,1145 
 para
  Specifies the maximum amount of disk space that a session can use
  for temporary files, such as sort and hash temporary files, or the
! storage file for a held cursor.  A transaction attempting to exceed
! this limit will be cancelled.
  The value is specified in kilobytes, and <literal>-1</> (the
  default) means no limit.
  Only superusers can change this setting.



Re: [HACKERS] -Wformat-zero-length

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 03:10:24PM -0400, Alvaro Herrera wrote:
 Excerpts from Bruce Momjian's message of vie ago 03 13:32:21 -0400 2012:
  On Fri, Aug  3, 2012 at 12:53:22PM -0400, Álvaro Herrera wrote:
 
   How ready are we to move it to src/bin/?  Is it sensible to do so in
   9.3?
  
  We have talked about that.  Several people felt the instructions for
  using pg_upgrade was already too complex and that it isn't a
  general-user utility like pg_dump.  Not sure if that is still the
  general feeling.
 
 I don't disagree with pg_upgrade being operationally complex, but I
 don't see how this relates to contrib vs. non-contrib at all.  Are we
 supposed to only have simple programs in src/bin?  That seems a
 strange policy.

Well, perhaps we need to re-open the discussion then.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] -Wformat-zero-length

2012-08-03 Thread Alvaro Herrera
Excerpts from Bruce Momjian's message of vie ago 03 13:32:21 -0400 2012:
 On Fri, Aug  3, 2012 at 12:53:22PM -0400, Álvaro Herrera wrote:

  How ready are we to move it to src/bin/?  Is it sensible to do so in
  9.3?
 
 We have talked about that.  Several people felt the instructions for
 using pg_upgrade was already too complex and that it isn't a
 general-user utility like pg_dump.  Not sure if that is still the
 general feeling.

I don't disagree with pg_upgrade being operationally complex, but I
don't see how this relates to contrib vs. non-contrib at all.  Are we
supposed to only have simple programs in src/bin?  That seems a
strange policy.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] -Wformat-zero-length

2012-08-03 Thread Robert Haas
On Fri, Aug 3, 2012 at 3:22 PM, Bruce Momjian br...@momjian.us wrote:
 I don't disagree with pg_upgrade being operationally complex, but I
 don't see how this relates to contrib vs. non-contrib at all.  Are we
 supposed to only have simple programs in src/bin?  That seems a
 strange policy.

 Well, perhaps we need to re-open the discussion then.

I feel like putting it in src/bin would carry an implication of
robustness that I'm not sanguine about.  Granted, putting it in
contrib has already pushed the envelope in that direction further than
is perhaps warranted.  But ISTM that if we ever want to put this in
src/bin someone needs to devote some serious engineering time to
filing down the rough edges.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] -Wformat-zero-length

2012-08-03 Thread Bruce Momjian
On Fri, Aug  3, 2012 at 04:01:18PM -0400, Robert Haas wrote:
 On Fri, Aug 3, 2012 at 3:22 PM, Bruce Momjian br...@momjian.us wrote:
  I don't disagree with pg_upgrade being operationally complex, but I
  don't see how this relates to contrib vs. non-contrib at all.  Are we
  supposed to only have simple programs in src/bin?  That seems a
  strange policy.
 
  Well, perhaps we need to re-open the discussion then.
 
 I feel like putting it in src/bin would carry an implication of
 robustness that I'm not sanguine about.  Granted, putting it in
 contrib has already pushed the envelope in that direction further than
 is perhaps warranted.  But ISTM that if we ever want to put this in
 src/bin someone needs to devote some serious engineering time to
 filing down the rough edges.

I don't know how to file down any of the existing rough edges.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [WIP] Performance Improvement by reducing WAL for Update Operation

2012-08-03 Thread Heikki Linnakangas

On 03.08.2012 14:46, Amit kapila wrote:

Currently the change is done only for fixed-length columns in simple tables, 
and the tuple must not contain NULLs.

This is a proof of concept; the design and implementation need to be changed 
based on the final design required for handling other scenarios.

Update operation:
-
1. Check whether the table is simple (no TOAST, no before-update triggers).
2. Works only for tuples without NULLs.
3. Identify the modified columns from the target entry.
4. Based on the modified column list, check whether any variable-length columns are 
modified; if so, this optimization is not applied.
5. Identify the offset and length of the modified columns and store them as an 
optimized WAL tuple in the following format.
Note: the WAL update header is modified to denote whether the WAL update 
optimization is applied or not.
 WAL update header + Tuple header(no change from previous format) +
 [offset(2bytes)] [length(2 bytes)] [changed data value]
 [offset(2bytes)] [length(2 bytes)] [changed data value]
   
   


The performance will need to be re-verified after you fix these 
limitations. Those limitations need to be fixed before this can be applied.


It would be nice to use some well-known binary delta algorithm for this, 
rather than invent our own. OTOH, we have more knowledge of the 
attribute boundaries, so a custom algorithm might work better. In any 
case, I'd like to see the code to do the delta encoding/decoding to be 
put into separate functions, outside of heapam.c. It would be good for 
readability, and we might want to reuse this in other places too.
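
As a toy model of the record layout quoted above ([offset][length][changed data], 
repeated per modified column), a delta could be encoded and applied like this. 
This is illustrative Python only, not the proposed heapam.c implementation, and 
the little-endian 2-byte fields are an assumption for the sketch:

```python
import struct

def encode_delta(changes):
    """Pack (offset, new_value) pairs as [offset:2][length:2][data] records,
    mirroring the proposed optimized WAL tuple layout."""
    out = bytearray()
    for offset, data in changes:
        out += struct.pack("<HH", offset, len(data)) + data
    return bytes(out)

def apply_delta(old_tuple, delta):
    """Rebuild the new tuple by overlaying each changed range on the old one."""
    new_tuple = bytearray(old_tuple)
    pos = 0
    while pos < len(delta):
        offset, length = struct.unpack_from("<HH", delta, pos)
        pos += 4
        new_tuple[offset:offset + length] = delta[pos:pos + length]
        pos += length
    return bytes(new_tuple)

old = b"aaaabbbbcccc"
delta = encode_delta([(4, b"XXXX")])   # column at offset 4, 4 bytes changed
print(apply_delta(old, delta))         # b'aaaaXXXXcccc'
```

The delta here costs 8 bytes instead of the full 12-byte tuple; the saving grows 
with tuple width, which is the point of the proposal.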


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] [PATCH] Docs: Make notes on sequences and rollback more obvious

2012-08-03 Thread Kevin Grittner
Craig Ringer ring...@ringerc.id.au wrote:
 
 I'm seeing enough questions on pgsql-general and stack overflow
 to suggest that the docs for how sequences interact with
 transaction rollback need improvement.
 
Yeah, I've noticed a surprising number of people who are being
surprised by the non-transactional nature of sequences (and serial
columns) in spite of current caveats in the docs; so I agree that we
should punch that up in the docs a bit.
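 
The surprise in question — a sequence advance survives a rolled-back 
transaction, leaving gaps — can be modeled with a toy sketch (Python, purely 
illustrative; real sequences are shared, WAL-logged objects):

```python
class ToySequence:
    """Counter that, like a PostgreSQL sequence, is never rolled back."""
    def __init__(self):
        self.value = 0

    def nextval(self):
        self.value += 1
        return self.value

seq = ToySequence()
committed_ids = []

def run_txn(seq, abort=False):
    """Simulate a transaction that grabs an id, then commits or aborts.
    The table insert is undone on abort; the sequence advance is not."""
    new_id = seq.nextval()
    if abort:
        return None            # rollback: the row is gone, the id is burned
    committed_ids.append(new_id)
    return new_id

run_txn(seq)               # commits id 1
run_txn(seq, abort=True)   # aborts; id 2 is consumed but never appears
run_txn(seq)               # commits id 3
print(committed_ids)       # [1, 3] -- a hole at 2
```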
 
 The attached patch:
 
 - Moves the note about nextval() from the footer to be inside the 
 nextval description
 
 - Adds an xref from the advanced-transactions tutorial where the
 poster noted their point of confusion, noting the exemption and
 pointing to the docs on nextval.
 
 - A pointer from the docs on SERIAL types to the nextval notes on
 tx rollback.
 
 Comments would be appreciated.
 
I haven't reviewed it in detail but noticed an apparent editing
error: "which are used the counters" should probably have an "as"
thrown in there.  Or something.
 
-Kevin



Re: [HACKERS] Issue in Behavior of Interval Datatype

2012-08-03 Thread Tom Lane
Amit Kapila amit.kap...@huawei.com writes:
 select (interval '56:48'  minute to second);
 result: 00:56:48
 select (interval '-56:48'  minute to second);
 result: -56:48:00
 select (interval '+56:48'  minute to second);
 result: 56:48:00

 I have checked the code and found that in function DecodeInterval(), for
 timezone case (DTK_TZ) it uses INTERVAL_FULL_RANGE irrespective of range
 passed by user.

 However, if we use the range passed as an argument in function DecodeInterval(),
 the result of using '+' or '-' is the same as without
 using it.

 Is there any particular reason for ignoring the range for DTK_TZ case in
 DecodeInterval() function?

I think you are right; this if-block should be exactly like the DTK_TIME
case except for handling the prepended sign.  That also raises the
question why it is changing the tmask value returned by DecodeTime.
It seems to be doing exactly the wrong thing there.  Test case:

regression=# select (interval '56:48 56:48'  );
ERROR:  invalid input syntax for type interval: 56:48 56:48
LINE 1: select (interval '56:48 56:48'  );
 ^
regression=# select (interval '56:48 +56:48'  );
 interval 
--
 56:48:00
(1 row)

The second one fails to fail because an inappropriate tmask value got
included into fmask.

Will fix.
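
To illustrate the intended behavior (a Python sketch, not the C in
DecodeInterval): under a MINUTE TO SECOND range, a leading sign should only
flip the sign, with the field range applied exactly as in the unsigned case.
The helper name and the simplified parsing are assumptions for illustration:

```python
def parse_minute_to_second(value):
    """Parse 'MM:SS' with an optional leading sign under a MINUTE TO SECOND
    range, returning total seconds.  Shows that '+56:48' and '56:48' must
    mean the same thing; only '-' flips the sign."""
    sign = 1
    if value and value[0] in "+-":
        if value[0] == "-":
            sign = -1
        value = value[1:]           # strip the sign, then parse as the unsigned case
    minutes, seconds = (int(field) for field in value.split(":"))
    return sign * (minutes * 60 + seconds)

print(parse_minute_to_second("56:48"))    # 3408
print(parse_minute_to_second("+56:48"))   # 3408  (same as unsigned, as expected)
print(parse_minute_to_second("-56:48"))   # -3408
```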

regards, tom lane



Re: [HACKERS] [PATCH] Docs: Make notes on sequences and rollback more obvious

2012-08-03 Thread Craig Ringer

On 08/04/2012 04:12 AM, Kevin Grittner wrote:

I haven't reviewed it in detail but noticed an apparent editing error: 
"which are used the counters" should probably have an "as" thrown in 
there. Or something.


Thanks. Editing fail. I revised that spot repeatedly to try to keep it 
short and simple without in any way implying that SEQUENCEs are *only* 
used for SERIAL columns.


Fixed attached.
diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml
index 218988e..75b1bd4 100644
--- a/doc/src/sgml/advanced.sgml
+++ b/doc/src/sgml/advanced.sgml
@@ -237,6 +237,16 @@ COMMIT;
 <command>COMMIT</>, and all our updates so far will be canceled.
    </para>
 
+   <note>
+     <para>
+      A few things in the database are exempt from rollback.  The most
+      important are <literal>SEQUENCE</>s - which are used by the counters in
+      <literal>SERIAL</> columns. See <xref linkend="functions-sequence">.  Any
+      function or type with special transactional behavior will have an explanatory
+      note in its documentation.
+     </para>
+   </note>
+
    <para>
     <productname>PostgreSQL</> actually treats every SQL statement as being
     executed within a transaction.  If you do not issue a <command>BEGIN</>
@@ -251,8 +261,8 @@ COMMIT;
    <para>
     Some client libraries issue <command>BEGIN</> and <command>COMMIT</>
     commands automatically, so that you might get the effect of transaction
-    blocks without asking.  Check the documentation for the interface
-    you are using.
+    blocks without asking. Client libraries often call this autocommit.
+    Check the documentation for the interface you are using.
    </para>
   </note>
 
diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index afc82a2..cbde801 100644
--- a/doc/src/sgml/datatype.sgml
+++ b/doc/src/sgml/datatype.sgml
@@ -800,7 +800,19 @@ NUMERIC
 <type>bigserial</type> are not true types, but merely
 a notational convenience for creating unique identifier columns
 (similar to the <literal>AUTO_INCREMENT</literal> property
- supported by some other databases). In the current
+ supported by some other databases).
+</para>
+
+<important>
+  <para>
+    Because they use <literal>SEQUENCE</literal>s, serial data types are
+    exempt from transactional rollback. This means they can have holes
+    or gaps where values are discarded. See <literal>nextval()</literal> in
+    <xref linkend="functions-sequence"> for details.
+  </para>
+</important>
+
+<para>In the current
 implementation, specifying:

<programlisting>
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 157de09..0296d3a 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -9820,6 +9820,27 @@ nextval('foo'::text)  <lineannotation><literal>foo</literal> is looked up at
 execute <function>nextval</function> concurrently, each will safely receive
 a distinct sequence value.
   </para>
+
+   <important>
+    <para>
+     To avoid blocking concurrent transactions that obtain numbers from the
+     same sequence, a <function>nextval</function> operation is never rolled back;
+     that is, once a value has been fetched it is considered used, even if the
+     transaction that did the <function>nextval</function> later aborts.  This means
+     that aborted transactions might leave unused <quote>holes</quote> in the
+     sequence of assigned values.  <function>setval</function> operations are never
+     rolled back, either.
+    </para>
+   </important>
+
+   <para>
+    If a sequence object has been created with default parameters,
+    successive <function>nextval</function> calls will return successive values
+    beginning with 1.  Other behaviors can be obtained by using
+    special parameters in the <xref linkend="sql-createsequence"> command;
+    see its command reference page for more information.
+   </para>
+
    </listitem>
   </varlistentry>

@@ -9883,31 +9904,17 @@ SELECT setval('foo', 42, false);<lineannotation>Next <function>nextval</> wi
 The result returned by <function>setval</function> is just the value of its
 second argument.
   </para>
+   <important>
+    <para>
+     Changes to sequences made by setval() are not undone if the transaction
+     rolls back. See the note on nextval().
+    </para>
+   </important>
   </listitem>
  </varlistentry>
</variablelist>
  </para>

-  <para>
-   If a sequence object has been created with default parameters,
-   successive <function>nextval</function> calls will return successive values
-   beginning with 1.  Other behaviors can be obtained by using
-   special parameters in the <xref linkend="sql-createsequence"> command;
-   see its command reference page for more information.
-  </para>
-
-  <important>
-   <para>
-    To avoid blocking concurrent transactions that obtain numbers from the
-    same sequence, a <function>nextval</function> operation is never rolled back;
-    that is, once a value has been