Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-08-17 Thread Fabrízio de Royes Mello
2012/8/16 Dickson S. Guedes lis...@guedesoft.net

 2012/8/16 Fabrízio de Royes Mello fabriziome...@gmail.com:
  The attached patch implements this feature:
 
  CREATE SCHEMA [ IF NOT EXISTS ] schema_name [ AUTHORIZATION user_name ] [
  schema_element [ ... ] ]
  CREATE SCHEMA [ IF NOT EXISTS ] AUTHORIZATION user_name [ schema_element
 [
  ... ] ]
 
  Now, PostgreSQL doesn't throw an error if we use IF NOT EXISTS in a CREATE
  SCHEMA statement.
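
(For reference, the behavior being added looks like this; the schema name is
hypothetical, and the exact notice wording may differ:)

```sql
CREATE SCHEMA IF NOT EXISTS demo_schema;  -- creates the schema
CREATE SCHEMA IF NOT EXISTS demo_schema;  -- second call is a no-op:
                                          -- a notice, not an error
```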

 I started testing this, but I didn't see regression tests for it.
 Could you write them?


The attached patch contains regression tests for it.

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


create_schema_if_not_exists_v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-08-17 Thread Fabrízio de Royes Mello
2012/8/17 Fabrízio de Royes Mello fabriziome...@gmail.com


 I started testing this, but I didn't see regression tests for it.
 Could you write them?


 The attached patch contains regression tests for it.


Please don't consider the last patch (v2); I made a small mistake in the
create_schema.sgml structure.  The attached patch fixes that.

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


create_schema_if_not_exists_v3.patch
Description: Binary data



Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-08-17 Thread Alvaro Herrera
Excerpts from Fabrízio de Royes Mello's message of vie ago 17 09:16:30 -0400 
2012:

 The attached patch contains regression tests for it.

I think you're missing support in copyfuncs.c and equalfuncs.c for the
new field in the node.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-08-17 Thread Fabrízio de Royes Mello
2012/8/17 Alvaro Herrera alvhe...@2ndquadrant.com

 Excerpts from Fabrízio de Royes Mello's message of vie ago 17 09:16:30
 -0400 2012:

  The attached patch contains regression tests for it.

 I think you're missing support in copyfuncs.c and equalfuncs.c for the
 new field in the node.


You're completely right, thanks...

The attached patch adds support for the new field in the node in
copyfuncs.c and equalfuncs.c.

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


create_schema_if_not_exists_v4.patch
Description: Binary data



Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-08-17 Thread Dickson S. Guedes
2012/8/17 Fabrízio de Royes Mello fabriziome...@gmail.com:

 2012/8/17 Alvaro Herrera alvhe...@2ndquadrant.com

 Excerpts from Fabrízio de Royes Mello's message of vie ago 17 09:16:30
 -0400 2012:

  The attached patch contains regression tests for it.

 I think you're missing support in copyfuncs.c and equalfuncs.c for the
 new field in the node.

 You're completely right, thanks...

 The attached patch adds support for the new field in the node in
 copyfuncs.c and equalfuncs.c.

Maybe I'm missing something, but shouldn't it be handled in extension.c too?

[]s
-- 
Dickson S. Guedes
mail/xmpp: gue...@guedesoft.net - skype: guediz
http://guedesoft.net - http://www.postgresql.org.br




[HACKERS] SERIAL columns in foreign tables

2012-08-17 Thread Alvaro Herrera
Hi,

While working on the NOT NULL constraint stuff, I noticed that if you're
creating a foreign table with a SERIAL column, it bleats like this:

CREATE FOREIGN TABLE ft1 (c1 serial) SERVER sc;
ERROR:  default values on foreign tables are not supported
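
(For context, "c1 serial" is shorthand that expands to roughly the following,
which is where the implicit default comes from; this is a sketch, not the
exact internal rewrite:)

```sql
CREATE SEQUENCE ft1_c1_seq;
CREATE FOREIGN TABLE ft1 (
    c1 integer NOT NULL DEFAULT nextval('ft1_c1_seq')
) SERVER sc;
-- the implicit DEFAULT is what trips the error above
```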

Note that it's talking about a default value, which the user never
explicitly defined.  So it's obviously about the internally defined
default value for nextval.  Now, the reason I noticed is that since
the column is also marked NOT NULL, and constraints are also disallowed
in foreign tables (and apparently checked earlier than defaults), the
error message changes:

*** /pgsql/source/notnull-constraint/src/test/regress/expected/foreign_data.out 
2012-08-14 18:04:51.589535046 -0400
--- 
/home/alvherre/Code/pgsql/build/notnull-constraint/src/test/regress/results/foreign_data.out
2012-08-16 17:49:17.208314737 -0400
***
*** 665,671 
  CREATE FOREIGN TABLE ft1 () SERVER no_server;   -- ERROR
  ERROR:  server "no_server" does not exist
  CREATE FOREIGN TABLE ft1 (c1 serial) SERVER sc; -- ERROR
! ERROR:  default values on foreign tables are not supported
  CREATE FOREIGN TABLE ft1 () SERVER s0 WITH OIDS;-- ERROR
  ERROR:  syntax error at or near WITH OIDS
  LINE 1: CREATE FOREIGN TABLE ft1 () SERVER s0 WITH OIDS;
--- 665,671 
  CREATE FOREIGN TABLE ft1 () SERVER no_server;   -- ERROR
  ERROR:  server "no_server" does not exist
  CREATE FOREIGN TABLE ft1 (c1 serial) SERVER sc; -- ERROR
! ERROR:  constraints on foreign tables are not supported
  CREATE FOREIGN TABLE ft1 () SERVER s0 WITH OIDS;-- ERROR
  ERROR:  syntax error at or near WITH OIDS
  LINE 1: CREATE FOREIGN TABLE ft1 () SERVER s0 WITH OIDS;


I guess one very simple solution to this problem would be to just adjust
the expected output for foreign tables.

However, I'm wondering if it'd be better to tweak the code to explicitly
check for SERIAL/BIGSERIAL instead of letting it error out on internal
conditions.  The way it currently is seems a bit user-unfriendly to me.

Thoughts?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] psql output locations

2012-08-17 Thread Bruce Momjian
On Wed, Dec 14, 2011 at 10:57:25AM -0500, Robert Haas wrote:
 On Wed, Dec 14, 2011 at 4:45 AM, Magnus Hagander mag...@hagander.net wrote:
  * There are a number of things that are always written to stdout, that
  there is no way to redirect. In some cases it's interactive prompts -
  makes sense - but also for example the output of \timing goes to
  stdout always. Is there some specific logic behind what/when this
  should be done?
 
  Everything that is not an error goes to stdout, no?  Except the query
  output, if you change it.
 
  Maybe the way to do what you want is to invent a new setting that
  temporarily changes stdout.
 
  Yeah, that might be it. Or I need separate settings for "put errors in
  the query output stream" and "put non-query-output-but-also-non-errors
  in the query output stream". The effect would be the same, I guess...
 
 That seems an awful lot harder (and messier) than just changing
 all the call sites to use the same error-reporting function.

I have done as you suggested with the attached patch.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB                            http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
new file mode 100644
index 205bb50..dc04399
*** a/src/bin/psql/command.c
--- b/src/bin/psql/command.c
*** HandleSlashCmds(PsqlScanState scan_state
*** 110,116 
  	if (status == PSQL_CMD_UNKNOWN)
  	{
  		if (pset.cur_cmd_interactive)
! 			fprintf(stderr, _("Invalid command \\%s. Try \\? for help.\n"), cmd);
  		else
  			psql_error("invalid command \\%s\n", cmd);
  		status = PSQL_CMD_ERROR;
--- 110,116 
  	if (status == PSQL_CMD_UNKNOWN)
  	{
  		if (pset.cur_cmd_interactive)
! 			psql_error("Invalid command \\%s. Try \\? for help.\n", cmd);
  		else
  			psql_error("invalid command \\%s\n", cmd);
  		status = PSQL_CMD_ERROR;
*** exec_command(const char *cmd,
*** 904,910 
  
  		if (strcmp(pw1, pw2) != 0)
  		{
! 			fprintf(stderr, _("Passwords didn't match.\n"));
  			success = false;
  		}
  		else
--- 904,910 
  
  		if (strcmp(pw1, pw2) != 0)
  		{
! 			psql_error("Passwords didn't match.\n");
  			success = false;
  		}
  		else
*** exec_command(const char *cmd,
*** 922,928 
  
  			if (!encrypted_password)
  			{
! 				fprintf(stderr, _("Password encryption failed.\n"));
  success = false;
  			}
  			else
--- 922,928 
  
  			if (!encrypted_password)
  			{
! 				psql_error("Password encryption failed.\n");
  success = false;
  			}
  			else
*** exec_command(const char *cmd,
*** 1441,1447 
  		while ((value = psql_scan_slash_option(scan_state,
  			   OT_NORMAL, NULL, true)))
  		{
! 			fprintf(stderr, "+ opt(%d) = |%s|\n", i++, value);
  			free(value);
  		}
  	}
--- 1441,1447 
  		while ((value = psql_scan_slash_option(scan_state,
  			   OT_NORMAL, NULL, true)))
  		{
! 			psql_error("+ opt(%d) = |%s|\n", i++, value);
  			free(value);
  		}
  	}
*** do_connect(char *dbname, char *user, cha
*** 1519,1525 
  		 *	to connect to the wrong database by using defaults, so require
  		 *	all parameters to be specified.
  		 */
! 		fputs(_("All connection parameters must be supplied because no database connection exists\n"), stderr);
  		return false;
  	}
  
--- 1519,1526 
  		 *	to connect to the wrong database by using defaults, so require
  		 *	all parameters to be specified.
  		 */
! 		psql_error("All connection parameters must be supplied because no "
! 				   "database connection exists\n");
  		return false;
  	}
  
*** do_connect(char *dbname, char *user, cha
*** 1608,1614 
  
  			/* pset.db is left unmodified */
  			if (o_conn)
! 				fputs(_("Previous connection kept\n"), stderr);
  		}
  		else
  		{
--- 1609,1615 
  
  			/* pset.db is left unmodified */
  			if (o_conn)
! 				psql_error("Previous connection kept\n");
  		}
  		else
  		{
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
new file mode 100644
index 330d5ce..f5bd0f6
*** a/src/bin/psql/common.c
--- b/src/bin/psql/common.c
*** pg_strdup(const char *string)
*** 42,48 
  
  	if (!string)
  	{
! 		fprintf(stderr, _("%s: pg_strdup: cannot duplicate null pointer (internal error)\n"),
  pset.progname);
  		exit(EXIT_FAILURE);
  	}
--- 42,48 
  
  	if (!string)
  	{
! 		psql_error("%s: pg_strdup: cannot duplicate null pointer (internal error)\n",
  pset.progname);
  		exit(EXIT_FAILURE);
  	}
*** psql_error(const char *fmt,...)
*** 161,167 
  	va_list		ap;
  
  	fflush(stdout);
! 	if (pset.queryFout != stdout)
  		fflush(pset.queryFout);
  
  	if (pset.inputfile)
--- 161,167 
  	va_list		ap;
  
  	fflush(stdout);
! 	if (pset.queryFout && pset.queryFout != stdout)
  		fflush(pset.queryFout);
  
  	if (pset.inputfile)
*** static PGcancel *volatile cancelConn = N
*** 219,224 
--- 219,225 
  

Re: [HACKERS] Planner avoidance of index only scans for partial indexes

2012-08-17 Thread Merlin Moncure
On Thu, Aug 16, 2012 at 5:20 PM, Josh Berkus j...@agliodbs.com wrote:
 I'm mainly looking for numbers for the 9.2 release.  Like, "up to 14X
 speedup on data warehousing queries."

OK:
I work in the multi-family industry, and we have a large database which
we are organizing into an on-demand OLAP-style data delivery project.
The data is organized into a fact table like this:
(PropertyId, Floorplan, UnitType, TimeSeries, AggregationTypeCode,
MetricCode, MetricValue)

Where MetricValue contains a single numeric value for the calculated
metric (say, average rent).  The other fields point at the identifying
criteria for the metric: the property it pertains to, etc.  TimeSeries
represents a point in time: it's a string like 'Y2012M01',
'Y2011Q3', etc.

The table is partitioned on a two-year basis.  The 2010_2011 partition
has 37 million records and is only expected to grow as we add new
properties and metrics.  One of the important questions this table has
to answer is how to gather metric values like this:

SELECT PropertyId, AGG(MetricValue) FROM PropertyMetric
WHERE Floorplan = w and UnitType = x and TimeSeries = y and MetricCode = z;

And that query might return 7k-15k records, depending on the criteria.
It is extremely important for it to be fast, and cache warm-up is a huge
issue for us, as we have various nightly processes that blow the cache
out.  The main interface to the feature is a routine called GetChartData,
which runs the query above N times looking back in time from a known
point (typically 12 times) and returns arrays that get converted to JSON
on the way out.

On the 9.2 database I organized one of the partitions (only) to
utilize IOS and started capturing timings when I came in this morning
with a completely cold cache.  Partition tables have been chilled as
part of the nightly build.
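
(For readers following along: an index-only scan requires an index that
contains every column the query touches.  The index shape here would be
roughly the following; the column order is my guess, not the actual DDL:)

```sql
CREATE INDEX propertymetric_ios_idx
    ON PropertyMetric (Floorplan, UnitType, TimeSeries, MetricCode,
                       PropertyId, MetricValue);
```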

These are 100% real-world results.  Output data is 3 parallel arrays of size 12.

9.1 Run 1 (Cold), 12M partition
Time: 5147.000 ms

9.1 Run 2 (Warm), 12M partition
Time: 219.000 ms

9.2 Run 1 (Cold) 12M partition, Index Only scan
Time: 257.000 ms

9.2 Run 2 (Warm) 12M partition, Index Only scan
Time: 92.000 ms

9.1 Run 1 (Cold) 37M partition
Time: 22074.000 ms

9.1 Run 2 (Warm) 37M partition
Time: 435.000 ms

9.2 Run 1 (Cold) 37M partition (not IOS!)
Time: 7629.000 ms

9.2 Run 2 (Warm) 37M partition (not IOS!)
Time: 183.000 ms

The takeaway is that for this query I'm seeing end-to-end query
execution improvement of 58% in the worst case (warm cache) and 20x or
more in the best case when the cache is cold: that 22 sec time is
unpredictable and can run into the minutes as the index randomizes the
i/o to the heap (the table is not clustered on this index, nor can it
be).  This 'best case' is extremely important to us, as it's generally
the first thing the user sees when firing up the application for the
first time, and with 9.2 it's a 'night and day' experience.  Here is the
bottom line in 9.1 vs 9.2 IOS:

9.1:
ysanalysis=# explain (buffers, analyze) SELECT ...
   QUERY PLAN

 Result  (cost=0.00..0.26 rows=1 width=0) (actual
time=219.979..219.979 rows=1 loops=1)
   Buffers: shared hit=176227
 Total runtime: 219.990 ms

9.2:
ysanalysis=# explain (buffers, analyze) SELECT ...
  QUERY PLAN
--
 Result  (cost=0.00..0.26 rows=1 width=0) (actual time=92.909..92.909
rows=1 loops=1)
   Buffers: shared hit=27198
 Total runtime: 92.921 ms

That's going to translate into less cache pressure and overall CPU
efficiency.  It's also a good technique for managing i/o patterns,
especially for those of us who don't have the luxury of an all-SSD
storage unit (my backend is a
generally-good-but-you-never-know-performance enterprise SAN).  Good
knowledge of the database internals is helpful, and precise indexing is
a must.

merlin




Re: [HACKERS] psql output locations

2012-08-17 Thread Alvaro Herrera
Excerpts from Bruce Momjian's message of vie ago 17 11:17:58 -0400 2012:
 On Wed, Dec 14, 2011 at 10:57:25AM -0500, Robert Haas wrote:
  On Wed, Dec 14, 2011 at 4:45 AM, Magnus Hagander mag...@hagander.net 
  wrote:
   * There are a number of things that are always written to stdout, that
   there is no way to redirect. In some cases it's interactive prompts -
   makes sense - but also for example the output of \timing goes to
   stdout always. Is there some specific logic behind what/when this
   should be done?
  
   Everything that is not an error goes to stdout, no?  Except the query
   output, if you change it.
  
   Maybe the way to do what you want is to invent a new setting that
   temporarily changes stdout.
  
   Yeah, that might be it. Or I need separate settings for put errors in
   the query output stream and put non-query-output-but-also-non-errors
   in the query output stream. The effect would be the same, I guess...
  
  That seems an awful lot harder (and messier) than just changing
  all the call sites to use the same error-reporting function.
 
 I have done as you suggested with the attached patch.

The very first hunk in your patch changes code that seems to be
explicitly checking the interactive flag.  Is the change really
wanted there?  Note Magnus explicitly commented about those in his
original post.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] psql output locations

2012-08-17 Thread Bruce Momjian
On Fri, Aug 17, 2012 at 12:22:38PM -0400, Alvaro Herrera wrote:
 Excerpts from Bruce Momjian's message of vie ago 17 11:17:58 -0400 2012:
  On Wed, Dec 14, 2011 at 10:57:25AM -0500, Robert Haas wrote:
   On Wed, Dec 14, 2011 at 4:45 AM, Magnus Hagander mag...@hagander.net 
   wrote:
* There are a number of things that are always written to stdout, that
there is no way to redirect. In some cases it's interactive prompts -
makes sense - but also for example the output of \timing goes to
stdout always. Is there some specific logic behind what/when this
should be done?
   
Everything that is not an error goes to stdout, no?  Except the query
output, if you change it.
   
Maybe the way to do what you want is to invent a new setting that
temporarily changes stdout.
   
Yeah, that might be it. Or I need separate settings for put errors in
the query output stream and put non-query-output-but-also-non-errors
in the query output stream. The effect would be the same, I guess...
   
   That seems an awful lot harder (and messier) than just changing
   all the call sites to use the same error-reporting function.
  
  I have done as you suggested with the attached patch.
 
 The very first hunk in your patch changes code that seems to be
 explicitly checking the interactive flag.  Is the change really
 wanted there?  Note Magnus explicitly commented about those in his
 original post.

I noticed that, but the output would be the same because there is no
input file location to trigger it.  I thought the interactive flag was
there just to provide more customized text.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB                            http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Not HOT enough

2012-08-17 Thread Robert Haas
On Thu, Aug 16, 2012 at 8:59 PM, Bruce Momjian br...@momjian.us wrote:
 Did we want to apply this?

Tom and I both opined upthread that it wasn't safe.  Nothing's
happened since then to change my mind.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] psql \set vs \copy - bug or expected behaviour?

2012-08-17 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Fri, Oct 21, 2011 at 05:31:41PM -0400, Robert Haas wrote:
 I'm not sure whether that's a bug per se, but I can see where a
 behavior change might be an improvement.

 I did some research on this and learned a little more about flex rules.

 Turns out we can allow variable substitution in psql whole-line
 commands, like \copy and \!, by sharing the variable expansion flex
 rules with the code that does argument processing.  

Well, it'd be nice to allow substitution there ...

 What we can't easily do is allow quotes to prevent variable
 substitution in these whole-line commands, because we can't process the
 quotes; processing them would remove them.

... but if there is then no way to prevent it, that's absolutely
unacceptable.

regards, tom lane




Re: [HACKERS] psql \set vs \copy - bug or expected behaviour?

2012-08-17 Thread Richard Huxton

On 17/08/12 18:38, Tom Lane wrote:

Bruce Momjianbr...@momjian.us  writes:

On Fri, Oct 21, 2011 at 05:31:41PM -0400, Robert Haas wrote:

I'm not sure whether that's a bug per se, but I can see where a
behavior change might be an improvement.



I did some research on this and learned a little more about flex rules.



Turns out we can allow variable substitution in psql whole-line
commands, like \copy and \!, by sharing the variable expansion flex
rules with the code that does argument processing.


Well, it'd be nice to allow substitution there ...


What we can't easily do is allow quotes to prevent variable
substitution in these whole-line commands, because we can't process the
quotes; processing them would remove them.


... but if there is then no way to prevent it, that's absolutely
unacceptable.


If I'm understanding this correctly, \copy parsing just passes the query
part unaltered, as part of a COPY statement, back into the top-level
parser.  Likewise with the \! shell stuff (but presumably to execve).


To handle variable substitution correctly for \copy we'd need to
duplicate the full parsing for COPY.  For \! we'd need something that
understood shell syntax (for the various shells out there).  Ick.


Or you'd need a separate variable-bracketing {{:x}} syntax that could 
work like reverse dollar-quoting. Also Ick.


As far as we know this has only inconvenienced one person (me) badly
enough to report a maybe-bug.  Thanks for trying, Bruce, but I fear this
is one itch that'll go unscratched.


Rest assured I'm not about to storm off and replace all my installations 
with MySQL :-)


--
  Richard Huxton
  Archonet Ltd




Re: [HACKERS] psql \set vs \copy - bug or expected behaviour?

2012-08-17 Thread Bruce Momjian
On Fri, Aug 17, 2012 at 06:55:14PM +0100, Richard Huxton wrote:
 Well, it'd be nice to allow substitution there ...
 
 What we can't easily do is allow quotes to prevent variable
 substitution in these whole-line commands, because we can't process the
 quotes; processing them would remove them.
 
 ... but if there is then no way to prevent it, that's absolutely
 unacceptable.
 
 If I'm understanding this correctly, \copy parsing just passes the
 query part unaltered as part of a COPY statement back into the
 top-level parser. Likewise with the \!shell stuff (but presumably to
 execve).
 
 To handle variable-substitution correctly for \copy we'd need to
 duplicate the full parsing for COPY. For \! we'd need something
 which understood shell-syntax (for the various shells out there).
 Ick.
 
 Or you'd need a separate variable-bracketing {{:x}} syntax that
 could work like reverse dollar-quoting. Also Ick.
 
 As far as we know this has only inconvenienced one person (me) badly
 enough to report a maybe-bug. Thanks for trying Bruce, but I fear
 this is one itch that'll go unscratched.
 
 Rest assured I'm not about to storm off and replace all my
 installations with MySQL :-)

Good analysis.  Basically we can't hope to fully understand COPY or
shell quoting syntax well enough to properly replace only unquoted psql
variable references.
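
(A sketch of the ambiguity, with hypothetical object and file names:)

```sql
\set tbl my_table
\copy :tbl from 'data.csv'
-- substitution is useful above, but psql cannot tell that this
-- ':tbl' sits inside COPY-level quotes without parsing them itself:
\copy other_table from 'dir/:tbl/data.csv'
```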

Therefore, unless I hear otherwise, I will just document the limitation
and withdraw the patch.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB                            http://enterprisedb.com

  + It's impossible for everything to be true. +




[HACKERS] NOT NULL constraints in foreign tables

2012-08-17 Thread Alvaro Herrera
Hi,

I noticed one more problem with NOT NULL constraints and foreign tables
-- which is that they are allowed at all (see also
http://archives.postgresql.org/message-id/1345214955-sup-3...@alvh.no-ip.org
earlier today).

Right now, with my patch, foreign table creation fails if you have a NOT
NULL column, because that NOT NULL will create a check constraint, and
those are disallowed for foreign tables.  So while HEAD allows you to
create the table, my patch causes that to fail.

There are two things I could do about this:

1. Avoid creating the CHECK constraint for NOT NULL declarations in
foreign tables.  This is the easiest to code, but it leaves us in the
situation that foreign tables will lack pg_constraint rows for NOT NULL
columns.  Not sure how undesirable this is.

2. Avoid having DefineRelation complain if it sees CHECK constraints
which correspond to some NOT NULL column declaration.  This is more
complex to implement, because basically we'd have to decompile the CHECK
constraint to find out whether it corresponds to something that had
previously been declared as just NOT NULL, and throw an error otherwise.
But this would mean we would have pg_constraint rows for those columns
... which I'm not really sure is a good thing.

I mean, what are NOT NULL constraints in foreign tables for?  Are they
harmed or helped by having pg_constraint rows?

Thanks.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] NOT NULL constraints in foreign tables

2012-08-17 Thread Tom Lane
Alvaro Herrera alvhe...@2ndquadrant.com writes:
 I mean, what are NOT NULL in foreign tables for?

There was previous discussion about that, in the context of check
constraints in general, but I don't believe we reached consensus.
http://archives.postgresql.org/message-id/1038.1331738...@sss.pgh.pa.us

There's also an open commitfest entry for changing the behavior
of file_fdw, which IMO is completely premature since we haven't
got consensus on what it should do.

regards, tom lane




Re: [HACKERS] NOT NULL constraints in foreign tables

2012-08-17 Thread Robert Haas
On Fri, Aug 17, 2012 at 2:58 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 I mean, what are NOT NULL in foreign tables for?  Are they harmed or
 helped by having pg_constraint rows?

As I've mentioned when this has come up before, I think that
constraints on foreign tables should be viewed as declarative
statements about the contents of the foreign data that the DB will
assume true.  This could be useful for a variety of purposes:
constraint exclusion, query optimization, etc.
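
(Under that view, a declaration like the following, on a hypothetical
foreign table, would simply be trusted by the planner rather than enforced:)

```sql
CREATE FOREIGN TABLE ft_orders (
    id    integer NOT NULL,           -- assumed never NULL remotely
    total numeric CHECK (total >= 0)  -- assumed true; usable by the planner
) SERVER remote_srv;
```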

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] feature request: auto savepoint for interactive psql when in transaction.

2012-08-17 Thread Ross Reedstrom
On Wed, Aug 15, 2012 at 10:26:55PM -0400, Bruce Momjian wrote:
 On Mon, Nov 14, 2011 at 04:19:30PM -0600, Ross Reedstrom wrote:
  On Wed, Sep 28, 2011 at 11:47:51AM -0700, David Fetter wrote:
   On Wed, Sep 28, 2011 at 02:25:44PM -0400, Gurjeet Singh wrote:
On Wed, Sep 28, 2011 at 1:51 PM, Kevin Grittner 
kevin.gritt...@wicourts.gov
 wrote:

 Alvaro Herrera alvhe...@commandprompt.com wrote:

  See ON_ERROR_ROLLBACK
  http://www.postgresql.org/docs/9.0/static/app-psql.html

 I had missed that.  Dang, this database product is rich with nice
 features!  :-)


+1

I would like it to be on/interactive by default, though.
   
   You can have it by putting it in your .psqlrc.
   
   If we were just starting out, I'd be all for changing the defaults,
   but we're not.  We'd break things unnecessarily if we changed this
   default.
   
  
  This discussion died out with a plea for better documentation, and perhaps 
  some
  form of discoverability. I've scanned ahead and see no further discussion.
  However, I'm wondering, what use-cases would be broken by setting the 
  default
  to 'interactive'? Running a non-interactive script by piping it to psql?
  Reading the code, I see that case is covered: the definition of
  'interactive' requires that both stdin and stdout are a tty, and that
  the source of commands is stdin.
  Seems this functionality appeared in version 8.1.  Was there discussion re:
  making it the default at that time?  I'm all for backward compatibility, 
  but I'm
  having trouble seeing what would break.
  
  I see that Peter blogged about this from a different angle over a year ago
  (http://petereisentraut.blogspot.com/2010/03/running-sql-scripts-with-psql.html)
  which drew a comment from Tom Lane that perhaps we need a better/different 
  tool
  for running scripts. That would argue the defaults for psql proper should 
  favor
  safe interactive use (autocommit off, anyone?) Peter mentioned the 
  traditional
  method unix shells use to handle this: different config files are read for
  interactive vs. non-interactive startup. Seems we have that, just for the 
  one
  setting ON_ERROR_ROLLBACK.
 
 What documentation improvement are you suggesting?  The docs seem clear
 to me.

Wow, that's a blast from the past: November.  I don't think I was asking for
docs changes; I just noted that the thread ended with a plea from others for
docs.  I was wondering what breakage would supposedly occur by changing the
default psql ON_ERROR_ROLLBACK behavior to 'interactive', since the code
guards that pretty hard to make sure it's a human in a terminal, not a
redirect or script.
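
(For reference, the behavior under discussion can already be enabled per-user
with one line in ~/.psqlrc:)

```sql
\set ON_ERROR_ROLLBACK interactive
```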

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer & Admin, Research Scientist   phone: 713-348-6166
Connexions  http://cnx.org                     fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE


 
 -- 
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com
 
   + It's impossible for everything to be true. +
 
 
 




Re: [HACKERS] NOT NULL constraints in foreign tables

2012-08-17 Thread Alvaro Herrera
Excerpts from Robert Haas's message of vie ago 17 15:44:29 -0400 2012:
 On Fri, Aug 17, 2012 at 2:58 PM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
  I mean, what are NOT NULL in foreign tables for?  Are they harmed or
  helped by having pg_constraint rows?
 
 As I've mentioned when this has come up before, I think that
 constraints on foreign tables should be viewed as declarative
 statements about the contents of the foreign data that the DB will
 assume true.  This could be useful for a variety of purposes:
 constraint exclusion, query optimization, etc.

So pg_constraint rows for NOT NULLs are a good thing, right?

In general, it seems to me that you're saying we should just lift the
DefineRelation-enforced restriction that foreign tables ought not to
have constraints.  So if the user wants to specify

CREATE FOREIGN TABLE people (
who person CHECK (who IS OF TYPE 'human'),
..
) server foobar;

we ought to let them.  Correct?

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2012-08-17 Thread Robert Haas
On Thu, Aug 16, 2012 at 9:11 PM, Bruce Momjian br...@momjian.us wrote:
 Was this resolved?  (Sorry to be bugging everyone.)

Nope.  I've got some ideas, but not enough round tuits.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Planner avoidance of index only scans for partial indexes

2012-08-17 Thread Josh Berkus
Merlin,

Thanks!

 The takeaway is that for this query I'm seeing end to end query
 execution improvement of 58% in the worst case (warm cache) and 20x or
 more in the best case when the cache is cold: that 22 sec time is
 unpredictable and can run into the minutes as the index randomizes the
 i/o to the heap (the table is not clustered on this index nor can it
 be).   This 'best case' is extremely important to us, as it's generally
 the first thing the user sees when firing up the application for the first
 time, and with 9.2 it's a 'night and day' experience.  Here is the
 bottom line in 9.1 vs 9.2 IOS:

So I could easily say: "An up to 20X speedup on data warehousing
queries", no?  Or "A between 2X and 20X speed increase on data
warehousing queries."

Of course, I'd love to see you able to blog the detail ...


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-08-17 Thread Fabrízio de Royes Mello
2012/8/17 Alvaro Herrera alvhe...@2ndquadrant.com

 Excerpts from Dickson S. Guedes's message of vie ago 17 10:37:25 -0400
 2012:
  2012/8/17 Fabrízio de Royes Mello fabriziome...@gmail.com:
  
   2012/8/17 Alvaro Herrera alvhe...@2ndquadrant.com
  
   Excerpts from Fabrízio de Royes Mello's message of vie ago 17 09:16:30
   -0400 2012:
  
The attached patch contains regression tests for it.
  
   I think you're missing support in copyfuncs.c and equalfuncs.c for the
   new field in the node.
  
   You're completely right, thanks...
  
   The attached patch add support for the new field in the node in
   copyfuncs.c and equalfuncs.c.
 
   Maybe I'm missing something, but shouldn't it be handled in
  extension.c too?

 Please be more explicit?  I don't know what you mean.


Returning conversation to the list.

I think he talked about this piece of code:

diff --git a/src/backend/commands/extension.c
b/src/backend/commands/extension.c
index 8512cdb..e359a9c 100644
--- a/src/backend/commands/extension.c
+++ b/src/backend/commands/extension.c
@@ -1374,6 +1374,7 @@ CreateExtension(CreateExtensionStmt *stmt)
csstmt->schemaname = schemaName;
csstmt->authid = NULL;  /* will be created by current user */
csstmt->schemaElts = NIL;
+   csstmt->if_not_exists = false;
CreateSchemaCommand(csstmt, NULL);

/*

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


create_schema_if_not_exists_v5.patch
Description: Binary data



[HACKERS] asynchronous disk io (was : tuplesort memory usage)

2012-08-17 Thread johnlumby


Date: Fri, 17 Aug 2012 00:26:37 +0100
From: Peter Geoghegan pe...@2ndquadrant.com
To: Jeff Janes jeff.ja...@gmail.com
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Subject: Re: tuplesort memory usage: grow_memtuples
Message-ID: caeylb_vezpkdx54vex3x30oy_uoth89xoejjw6aucjjiujs...@mail.gmail.com

On 27 July 2012 16:39, Jeff Janes jeff.ja...@gmail.com wrote:

  Can you suggest a benchmark that will usefully exercise this patch?


  I think the given sizes below work on most 64 bit machines.

  [...]

I think this patch (or at least your observation about I/O waits
within vmstat) may point to a more fundamental issue with our sort
code: Why are we not using asynchronous I/O in our implementation?
There are anecdotal reports of other RDBMS implementations doing far
better than we do here, and I believe asynchronous I/O, pipelining,
and other such optimisations have a lot to do with that. It's
something I'd hoped to find the time to look at in detail, but
probably won't in the 9.3 cycle. One of the more obvious ways of
optimising an external sort is to use asynchronous I/O so that one run
of data can be sorted or merged while other runs are being read from
or written to disk. Our current implementation seems naive about this.
There are some interesting details about how this is exposed by POSIX
here:

http://www.gnu.org/software/libc/manual/html_node/Asynchronous-I_002fO.html


I've recently tried extending the PostgreSQL prefetch mechanism on Linux to
use the POSIX (i.e. librt) aio_read and friends where possible.  In other
words, in PrefetchBuffer(), try getting a buffer and issuing aio_read before
falling back to posix_fadvise().  It gives me about an 8% improvement in
throughput relative to the posix_fadvise variety, for a workload of 16
highly disk-read-intensive applications running against 16 backends.
For my test, each application runs a query chosen to have plenty of
bitmap heap scans.


I can provide more details on my changes if interested.

On whether this technique might improve sort performance:

First, the disk access pattern for sorting is mostly sequential (although I
think the sort module does some tricky work with reuse of pages in its
logtape files, which may be random-like), and there are several claims on
the net that Linux buffered file handling already does a pretty good job of
read-ahead for a sequential access pattern, without any need for the
application to help it.  I can half-confirm that: I tried adding calls to
PrefetchBuffer in the regular heap scan and did not see much improvement,
but I am still pursuing that area.

Second, it would be easy enough to add some posix_fadvise calls to sort and
see whether that helps.  (We can't make use of PrefetchBuffer, since sort
does not use the regular relation buffer pool.)






It's already anticipated that we might take advantage of libaio for
the benefit of FilePrefetch() (see its accompanying comments - it uses
posix_fadvise itself - effective_io_concurrency must be > 0 for this
to ever be called). It perhaps could be considered "parallel
low-hanging fruit" in that it allows us to offer limited though
useful backend parallelism without first resolving thorny issues
around what abstraction we might use, or how we might eventually make
backends thread-safe. AIO supports registering signal callbacks (a
SIGPOLL handler can be called), which seems relatively
uncontroversial.


I believe libaio is dead, as it depended on the old Linux kernel
asynchronous file I/O, which was problematic and imposed various
restrictions on the application.  librt aio has no such restrictions and
does a good enough job, but it uses pthreads and synchronous I/O, which can
make the CPU overhead a bit heavy; I also believe it causes more context
switching than plain synchronous I/O, whereas one of the benefits of kernel
async I/O (in theory) is reduced context switching.


From what I've seen, pthreads aio can give a benefit when there is high I/O
wait from mostly-read activity, the disk access pattern is not sequential
(so kernel readahead can't predict it) but PostgreSQL can predict it, and
there's enough spare idle CPU to run the pthreads.  So it does seem that
bitmap heap scan is a good choice for prefetching.






Platform support for AIO might be a bit lacking, but then you can say
the same about posix_fadvise. We don't assume that poll(2) is
available, but we already use it where it is within the latch code.
Besides, in-kernel support can be emulated if POSIX threads is
available, which I believe would make this broadly useful on unix-like
platforms.

-- Peter Geoghegan http://www.2ndQuadrant.com/ PostgreSQL Development, 
24x7 Support, Training and Services






Re: [HACKERS] NOT NULL constraints in foreign tables

2012-08-17 Thread Euler Taveira
On 17-08-2012 16:44, Robert Haas wrote:
 On Fri, Aug 17, 2012 at 2:58 PM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
 I mean, what are NOT NULL in foreign tables for?  Are they harmed or
 helped by having pg_constraint rows?
 
 As I've mentioned when this has come up before, I think that
 constraints on foreign tables should be viewed as declarative
 statements about the contents of the foreign data that the DB will
 assume true.  This could be useful for a variety of purposes:
 constraint exclusion, query optimization, etc.
 
+1. I don't see us providing a mechanism to cross-check schema changes between
data sources. Even if we did it at creation time, the schema could be changed
behind the scenes. Let's at least use constraints (NOT NULL, CHECK, UNIQUE, PK --
UNIQUE + NOT NULL) to help the optimizer, but warn (loudly) that those
constraints are used merely for optimization.


-- 
   Euler Taveira de Oliveira - Timbira   http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento




[HACKERS] New WAL code dumps core trivially on replay of bad data

2012-08-17 Thread Tom Lane
I just had HEAD fail to recover after a backend core dump, because the
startup process dumped core itself during replay:

LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2012-08-17 20:47:37 EDT
LOG:  database system was not properly shut down; automatic recovery in progress
LOG:  startup process (PID 9418) was terminated by signal 11
LOG:  aborting startup due to startup process failure

The startup process's stack trace is

#0  0x26fd1c in RecordIsValid (record=0x4008d7a0, recptr=80658424, emode=15)
at xlog.c:3713
3713            COMP_CRC32(crc, XLogRecGetData(record), len);
(gdb) bt
#0  0x26fd1c in RecordIsValid (record=0x4008d7a0, recptr=80658424, emode=15)
at xlog.c:3713
#1  0x270690 in ReadRecord (RecPtr=0x7b03bad0, emode=15, 
fetching_ckpt=0 '\000') at xlog.c:4006
#2  0x2761e8 in StartupXLOG () at xlog.c:6550
#3  0x463d28 in StartupProcessMain () at startup.c:222
#4  0x2ade78 in AuxiliaryProcessMain (argc=2, argv=0x7b03b850)
at bootstrap.c:418
#5  0x4631e0 in StartChildProcess (type=StartupProcess) at postmaster.c:4564
#6  0x461ddc in PostmasterStateMachine () at postmaster.c:3167
#7  0x460e48 in reaper (postgres_signal_arg=1074321312) at postmaster.c:2609

The current WAL address is 80658424 == 0x04cebff8, that is just 8 bytes
short of a page boundary, and what RecordIsValid thinks it is dealing
with is

(gdb) p *record
$1 = {xl_tot_len = 0, xl_xid = 0, xl_len = 1074108016, xl_info = 0 '\000', 
  xl_rmid = 0 '\000', xl_prev = 412316860416, xl_crc = 64}
(gdb) x/32 record
0x4008d7a0: 0x  0x  0x40059670  0x8009
0x4008d7b0: 0x0060  0x  0x0040  0x1000
0x4008d7c0: 0x  0x04cebec0  0x620583d8  0x
0x4008d7d0: 0x  0x04cebf98  0x0001  0x0100
0x4008d7e0: 0x  0x1947  0xc000  0x0001
0x4008d7f0: 0x  0x029a  0x0001  0x
0x4008d800: 0x  0x502ee5c0  0x  0x
0x4008d810: 0x  0x  0x  0x

so it merrily tries to compute a checksum on a gigabyte worth of data,
and soon falls off the end of memory.

In reality, inspection of the WAL file suggests that this is the end of
valid data and what should have happened is that replay just stopped.
The xl_len and so forth shown above are just garbage from off the end of
what was actually read from the file (everything beyond offset 0xcebff8
in file 4 is in fact zeroes).

I'm not sure whether this is just a matter of having failed to
sanity-check that xl_tot_len is at least SizeOfXLogRecord, or whether
there is a deeper problem with the new design of continuation records
that makes it impossible to validate records safely.

Certainly it's completely insane for RecordIsValid to be trusting xl_len
so implicitly as it does, and to be blithely CRC'ing backup blocks that
could extend off the known length of the record as well --- that is,
checking only at line 3738 to see if we've fallen off the end of memory
is a tad too late.  I realize that that code was like that before, but
apparently the earlier logic protected it to a greater extent than it
does now.

regards, tom lane




[HACKERS] CLUSTER VERBOSE tab completion

2012-08-17 Thread Jeff Janes
Tab completion will add USING after CLUSTER VERBOSE, as if VERBOSE
were the name of a table.

Instead of just making it not do the wrong thing, I tried to make it
actually do the right thing.

It doesn't fill in the VERBOSE for you; you have to type that in full,
but then it completes table names afterward (and USING and index names
after that).

Cheers,

Jeff


cluster_verbose_complete_v1.patch
Description: Binary data



Re: [HACKERS] New WAL code dumps core trivially on replay of bad data

2012-08-17 Thread Amit kapila
Tom Lane Sent: Saturday, August 18, 2012 7:16 AM

 The startup process's stack trace is

 #0  0x26fd1c in RecordIsValid (record=0x4008d7a0, recptr=80658424, emode=15)
at xlog.c:3713
 3713            COMP_CRC32(crc, XLogRecGetData(record), len);
 (gdb) bt
 #0  0x26fd1c in RecordIsValid (record=0x4008d7a0, recptr=80658424, emode=15)
at xlog.c:3713
 #1  0x270690 in ReadRecord (RecPtr=0x7b03bad0, emode=15,
fetching_ckpt=0 '\000') at xlog.c:4006

 The current WAL address is 80658424 == 0x04cebff8, that is just 8 bytes
 short of a page boundary, and what RecordIsValid thinks it is dealing
 with is


 so it merrily tries to compute a checksum on a gigabyte worth of data,
 and soon falls off the end of memory.

 In reality, inspection of the WAL file suggests that this is the end of
 valid data and what should have happened is that replay just stopped.
 The xl_len and so forth shown above are just garbage from off the end of
 what was actually read from the file (everything beyond offset 0xcebff8
 in file 4 is in fact zeroes).

 I'm not sure whether this is just a matter of having failed to
 sanity-check that xl_tot_len is at least SizeOfXLogRecord, or whether
 there is a deeper problem with the new design of continuation records
 that makes it impossible to validate records safely.

Earlier there was a check related to total length in ReadRecord, before it
calls RecordIsValid():

  if (record->xl_tot_len < SizeOfXLogRecord + record->xl_len ||
      record->xl_tot_len > SizeOfXLogRecord + record->xl_len +
      XLR_MAX_BKP_BLOCKS * (sizeof(BkpBlock) + BLCKSZ))

I think the missing total-length check has caused this problem.  However,
the check will now need to be different.

With Regards,
Amit Kapila.
