Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread hubert depesz lubaczewski
On Wed, Aug 31, 2011 at 09:54:20PM -0400, Bruce Momjian wrote:
> Working with depesz, I have found the cause.  The code I added to fix
> pg_upgrade in 9.0.4 and earlier releases didn't handle old 8.3 servers
> properly.  I mistakenly processed toast tables with the same pg_dump
> query as used for pre-8.4 toast tables, not realizing those were not
> functional because there were no reloptions for toast tables in pre-8.4.

Thanks a lot. Will test and post results (around Sunday/Monday I guess).

Best regards,

depesz


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] toast error after pg_upgrade 9.0.4 -> 9.1rc1

2011-08-31 Thread Lou Picciano


- Original Message -
From: "Bruce Momjian"  
To: "Lou Picciano"  
Cc: pgsql-hackers@postgresql.org 
Sent: Wednesday, August 31, 2011 10:38:01 PM 
Subject: Re: [HACKERS] toast error after pg_upgrade 9.0.4 -> 9.1rc1 

Lou Picciano wrote: 
> After running an essentially uneventful* pg_upgrade from 9.0.4 -> 
> 9.1rc1, we are seeing some toast errors logged on the new cluster: 
> 
> All are of this pattern: ERROR: missing chunk number 0 for toast value 
> 130087 in pg_toast_34735 
> 
> Have seen the same pattern for a few of the databases in the 9.1rc1 
> cluster, and all as a result of a select on a usr table (the offending 
> SELECT happens to be the first one any of these DBs sees, as it's the 
> first step in a user authentication process). SELECT count(*) does not 
> produce an error. 
> 
> *almost uneventful: We also saw messages that the destination cluster 
> did not have one of our schema - (of course it didn't!) - I didn't 
> realize pg_upgrade doesn't 'do' schema? 

I have fixed two errors in pg_upgrade since 9.1rc1 was released. Can 
you use git 9.1 head or wait for 9.1rc2 or 9.1 final? I can email you 
patches if you prefer. 

-- 
Bruce Momjian  http://momjian.us 
EnterpriseDB http://enterprisedb.com 

+ It's impossible for everything to be true. + 

Bruce, many thanks. I've done the recent git pull; yes, will build head from 
there and send you the new mileage report. 

(Glad your weekend with Irene is finally over - we had a few crises down here 
in New York; my fish were swimming in the back yard, lots of flooding, a few 
trees down - one of which was only prevented from falling on the house by the 16K 
volt primary line. Great! But we were luckier than many. ) 

Lou Picciano 


Re: [HACKERS] toast error after pg_upgrade 9.0.4 -> 9.1rc1

2011-08-31 Thread Bruce Momjian
Bruce Momjian wrote:
> Lou Picciano wrote:
> > After running an essentially uneventful* pg_upgrade from 9.0.4 ->
> > 9.1rc1, we are seeing some toast errors logged on the new cluster:
> > 
> > All are of this pattern: ERROR: missing chunk number 0 for toast value
> > 130087 in pg_toast_34735
> > 
> > Have seen the same pattern for a few of the databases in the 9.1rc1
> > cluster, and all as a result of a select on a usr table (the offending
> > SELECT happens to be the first one any of these DBs sees, as it's the
> > first step in a user authentication process). SELECT count(*) does not
> > produce an error.
> > 
> > *almost uneventful: We also saw messages that the destination cluster
> > did not have one of our schema - (of course it didn't!) - I didn't
> > realize pg_upgrade doesn't 'do' schema?

> I have fixed two errors in pg_upgrade since 9.1rc1 was released.  Can
> you use git 9.1 head or wait for 9.1rc2 or 9.1 final?  I can email you
> patches if you prefer.

Thinking some more, none of these errors was fixed by the patches I
applied.

The schema error seems very odd --- pg_upgrade certainly handles
schemas.  In fact, any error makes pg_upgrade stop, so I am curious what
the error was.  Did the upgrade fail and you just started the new
server?  That isn't good.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] toast error after pg_upgrade 9.0.4 -> 9.1rc1

2011-08-31 Thread Bruce Momjian
Lou Picciano wrote:
> After running an essentially uneventful* pg_upgrade from 9.0.4 ->
> 9.1rc1, we are seeing some toast errors logged on the new cluster:
> 
> All are of this pattern: ERROR: missing chunk number 0 for toast value
> 130087 in pg_toast_34735
> 
> Have seen the same pattern for a few of the databases in the 9.1rc1
> cluster, and all as a result of a select on a usr table (the offending
> SELECT happens to be the first one any of these DBs sees, as it's the
> first step in a user authentication process). SELECT count(*) does not
> produce an error.
> 
> *almost uneventful: We also saw messages that the destination cluster
> did not have one of our schema - (of course it didn't!) - I didn't
> realize pg_upgrade doesn't 'do' schema?

I have fixed two errors in pg_upgrade since 9.1rc1 was released.  Can
you use git 9.1 head or wait for 9.1rc2 or 9.1 final?  I can email you
patches if you prefer.

--
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Bruce Momjian
daveg wrote:
> On Mon, Aug 29, 2011 at 07:49:24PM +0200, hubert depesz lubaczewski wrote:
> > On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:
> > vacuumdb: vacuuming of database "etsy_v2" failed: ERROR:  could not access 
> > status of transaction 3429738606
> > DETAIL:  Could not open file "pg_clog/0CC6": No such file or directory.
> > 
> > Interestingly.
> > 
> > In old dir there is pg_clog directory with files:
> > 0AC0 .. 0DAF (including 0CC6, size 262144)
> > but new pg_clog has only:
> > 0D2F .. 0DB0
> > 
> > File content - nearly all files that exist in both places are the same, 
> > with the exception of the 2 newest ones in new datadir:
> > 3c5122f3e80851735c19522065a2d12a  0DAF
> > 8651fc2b9fa3d27cfb5b496165cead68  0DB0
> > 
> > 0DB0 doesn't exist in old, and 0DAF has different md5sum: 
> > 7d48996c762d6a10f8eda88ae766c5dd
> > 
> > one more thing. I did select count(*) from transactions and it worked.
> > 
> > that's about it. I can probably copy over files from old datadir to new (in
> > pg_clog/), and will be happy to do it, but I'll wait for your call - 
> > retrying with copied files might destroy some evidence.
> 
> I had this same thing happen this Saturday just past and my client had to
> restore the whole 2+ TB instance from the previous day's pg_dumps.
> I had been thinking that perhaps I did something wrong in setting up or
> running the upgrade, but had not found it yet. Now that I see Hubert has
> the same problem it is starting to look like pg_upgrade can eat all your
> data.
> 
> After running pg_upgrade apparently successfully and analyzing all the
> tables we restarted the production workload and started getting errors:
> 
> 2011-08-27 04:18:34.015  12337  c06  postgres  ERROR:  could not access 
> status of transaction 2923961093
> 2011-08-27 04:18:34.015  12337  c06  postgres  DETAIL:  Could not open file 
> "pg_clog/0AE4": No such file or directory.
> 2011-08-27 04:18:34.015  12337  c06  postgres  STATEMENT:  analyze 
> public.b_pxx;
> 
> On examination the pg_clog directory contained only two files timestamped
> after the startup of the new cluster with 9.0.4. Other hosts that upgraded
> successfully had numerous files in pg_clog dating back a few days. So it
> appears that all the clog files went missing during the upgrade somehow.
> This happened upgrading from 8.4.7 to 9.0.4, with a brief session in between
> at 8.4.8. We have upgraded several hosts to 9.0.4 successfully previously.

I have posted this fix to the hackers email list, but I found it only
affected old 8.3 servers, not old 8.4.X, so I am confused by your bug
report.

I have tested 8.4.X to 9.0.4 and found pg_upgrade preserves toast
relfrozenxids properly in that case.

Can you tell me what table is showing this error?  Does it happen during
vacuum?  Can you run a vacuum verbose to see what it is throwing the
error on?  Thanks.
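(A sketch of how the toast relfrozenxid preservation can be eyeballed on an upgraded cluster; this query is not from the thread, but it uses only standard pg_class columns:)

```
-- List each table alongside its TOAST relation's relfrozenxid;
-- after a correct pg_upgrade both should carry over from the old cluster.
SELECT c.oid::regclass AS table_name,
       c.relfrozenxid,
       t.relfrozenxid AS toast_relfrozenxid
FROM pg_class c
JOIN pg_class t ON c.reltoastrelid = t.oid
WHERE c.relkind = 'r'
ORDER BY c.oid;
```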

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Bruce Momjian
hubert depesz lubaczewski wrote:
> On Wed, Aug 31, 2011 at 01:23:05PM -0400, Bruce Momjian wrote:
> > Can you get me the 9.0.X pg_class.relfrozenxid for the toast and heap
> > tables involved?
> 
> Sure:
> 
> =# select oid::regclass, relfrozenxid from pg_class  where relname in 
> ('transactions', 'pg_toast_106668498');
>              oid             | relfrozenxid 
> -----------------------------+--------------
>  pg_toast.pg_toast_106668498 |   3673553926
>  transactions                |   3623560321
> (2 rows)

Working with depesz, I have found the cause.  The code I added to fix
pg_upgrade in 9.0.4 and earlier releases didn't handle old 8.3 servers
properly.  I mistakenly processed toast tables with the same pg_dump
query as used for pre-8.4 toast tables, not realizing those were not
functional because there were no reloptions for toast tables in pre-8.4.

The attached applied patches fix all releases.  This will have to be
mentioned in the 9.0.5 release notes, and we should probably do the same
kind of announcement we did when I fixed this for 9.0.4.  :-(

Yeah, I should not have caused this bug.  It did not show up in any of
my testing.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
new file mode 100644
index b00e19b..c5816ae
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
*** getTables(int *numTables)
*** 3256,3269 
  		 * owning column, if any (note this dependency is AUTO as of 8.2)
  		 */
  		appendPQExpBuffer(query,
! 		  "SELECT c.tableoid, c.oid, relname, "
! 		  "relacl, relkind, relnamespace, "
! 		  "(%s relowner) AS rolname, "
! 		  "relchecks, (reltriggers <> 0) AS relhastriggers, "
! 		  "relhasindex, relhasrules, relhasoids, "
! 		  "relfrozenxid, "
! 		  "0 AS toid, "
! 		  "0 AS tfrozenxid, "
  		  "d.refobjid AS owning_tab, "
  		  "d.refobjsubid AS owning_col, "
  		  "(SELECT spcname FROM pg_tablespace t WHERE t.oid = c.reltablespace) AS reltablespace, "
--- 3256,3268 
  		 * owning column, if any (note this dependency is AUTO as of 8.2)
  		 */
  		appendPQExpBuffer(query,
! 		  "SELECT c.tableoid, c.oid, c.relname, "
! 		  "c.relacl, c.relkind, c.relnamespace, "
! 		  "(%s c.relowner) AS rolname, "
! 		  "c.relchecks, (c.reltriggers <> 0) AS relhastriggers, "
! 		  "c.relhasindex, c.relhasrules, c.relhasoids, "
! 		  "c.relfrozenxid, tc.oid AS toid, "
! 		  "tc.relfrozenxid AS tfrozenxid, "
  		  "d.refobjid AS owning_tab, "
  		  "d.refobjsubid AS owning_col, "
  		  "(SELECT spcname FROM pg_tablespace t WHERE t.oid = c.reltablespace) AS reltablespace, "
*** getTables(int *numTables)
*** 3275,3281 
  		  "d.classid = c.tableoid AND d.objid = c.oid AND "
  		  "d.objsubid = 0 AND "
  		  "d.refclassid = c.tableoid AND d.deptype = 'a') "
! 		  "WHERE relkind in ('%c', '%c', '%c', '%c') "
  		  "ORDER BY c.oid",
  		  username_subquery,
  		  RELKIND_SEQUENCE,
--- 3274,3281 
  		  "d.classid = c.tableoid AND d.objid = c.oid AND "
  		  "d.objsubid = 0 AND "
  		  "d.refclassid = c.tableoid AND d.deptype = 'a') "
! 	   "LEFT JOIN pg_class tc ON (c.reltoastrelid = tc.oid) "
! 		  "WHERE c.relkind in ('%c', '%c', '%c', '%c') "
  		  "ORDER BY c.oid",
  		  username_subquery,
  		  RELKIND_SEQUENCE,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
new file mode 100644
index d6a547f..b73392b
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
*** getTables(int *numTables)
*** 3516,3529 
  		 * owning column, if any (note this dependency is AUTO as of 8.2)
  		 */
  		appendPQExpBuffer(query,
! 		  "SELECT c.tableoid, c.oid, relname, "
! 		  "relacl, relkind, relnamespace, "
! 		  "(%s relowner) AS rolname, "
! 		  "relchecks, (reltriggers <> 0) AS relhastriggers, "
! 		  "relhasindex, relhasrules, relhasoids, "
! 		  "relfrozenxid, "
! 		  "0 AS toid, "
! 		  "0 AS tfrozenxid, "
  		  "NULL AS reloftype, "
  		  "d.refobjid AS owning_tab, "
  		  "d.refobjsubid AS owning_col, "
--- 3516,3528 
  		 * owning column, if any (note this dependency is AUTO as of 8.2)
  		 */
  		appendPQExpBuffer(query,
! 		  "SELECT c.tableoid, c.oid, c.relname, "
! 		  "c.relacl, c.relkind, c.relnamespace, "
! 		  "(%s c.relowner) AS rolname, "
! 		  "c.relchecks, (c.reltriggers <> 0) AS relhastriggers, "
! 		  "c.relhasindex, c.relhasrules, c.relhasoids, "
! 		  "c.relfrozenxid, tc.oid AS toid, "
! 		  "tc.relfrozenxid AS tfrozenxid, "
  		  "NULL AS reloftype, "
  		  "d.refobjid AS owning_tab, "
  		  "d.refobjsubid AS owning_col, "
*** getTables(int *numTables)
*** 3536,3542 
  

Re: [HACKERS] [PATCH] Generate column names for subquery expressions

2011-08-31 Thread Tom Lane
Marti Raudsepp  writes:
> In current PostgreSQL versions, subquery expressions in the SELECT list
> always generate columns with name "?column?"
> ...
> This patch improves on that:
>   select (SELECT 1 AS foo) => foo
>   select exists(SELECT 1)  => exists
>   select array(SELECT 1)   => array

> Does this sound like a good idea?

Seems like a lot of room for bikeshedding here, but we could certainly
consider doing something.

> Should I submit this to the CommitFest?

Please.

regards, tom lane



Re: [HACKERS] setlocale() and gettext on Windows revisited

2011-08-31 Thread Inoue, Hiroshi

HI all,

(2011/09/01 4:30), Heikki Linnakangas wrote:

> Back in January/February, a patch was committed to avoid using libintl's
> version of setlocale:
>
> http://archives.postgresql.org/pgsql-hackers/2011-01/msg02628.php
>
> The comment says it was about a problem with printf() and friends, so I
> wonder, why was that "#undef setlocale" line put inside the larger
> "#ifdef USE_REPL_SNPRINTF" block? If I understand the problem correctly,
> it has nothing to do with our replacement snprintf() function.
>
> Fortunately, we always use the replacement snprintf() code on Windows,
> so there's no user-visible bug here, but if you imagine that we didn't
> USE_REPL_SNPRINTF on Windows, we would still want the "#undef setlocale"
> to take effect, right? I think that block is misplaced.


Yes you are right.
I didn't notice "#ifdef USE_REPL_SNPRINTF" unfortunately.
The "#undef setlocale" line should be placed outside the "#ifdef 
USE_REPL_SNPRINTF" block.


regards,
Hiroshi Inoue






Re: [HACKERS] Bogus nestloop join estimate, ignores WHERE clause

2011-08-31 Thread Tom Lane
Marti Raudsepp  writes:
> On Wed, Aug 31, 2011 at 23:59, Tom Lane  wrote:
>> Could we see the pg_stats rows for the two join columns?

> Sure, but I don't want to send this out to the public list since
> [ it's private data ]

Thanks for the data.  I set up a comparable test case and duplicated
your problem.  It looks like it is a variant of the same brain fade
being discussed over in pgsql-performance,
http://archives.postgresql.org/pgsql-performance/2011-08/msg00327.php

In your case, we are running through the branch of eqjoinsel_semi
that does have MCVs to play with, and that code path is effectively
not taking any account at all of restrictions applied to the inner
relation.  We need to have it clamp nd2 (and not nd1) along the same
lines as should be happening in the no-MCV-list code path.  This is
exactly the case I was thinking needed to be covered when I was
responding to Mark, and now I've got an example to prove it.

In this particular case, the estimate is probably still not going to be
that good, because you have so many empty-string keys that that one
value dominates the result.  The only way for the planner to get a real
quality estimate would be for it to know whether or not the specific
value of client_id mapped to an empty-string id_code, which would
require cross-column stats that we haven't got.  Things would get better
if you were willing to replace the empty strings with nulls, which the
planner would know couldn't match.  But I'm not sure if that is the
semantics you need.  In any case, the eqjoinsel_semi logic is broken;
will fix.

regards, tom lane



[HACKERS] [PATCH] Generate column names for subquery expressions

2011-08-31 Thread Marti Raudsepp
Hi list,

In current PostgreSQL versions, subquery expressions in the SELECT list
always generate columns with name "?column?"

postgres=# select (select 1 as foo);
 ?column? 
----------
        1
(1 row)

This patch improves on that:
  select (SELECT 1 AS foo) => foo
  select exists(SELECT 1)  => exists
  select array(SELECT 1)   => array

The "array" one is now consistent with an array literal: select array[1];

Other subquery types (=ALL(), =ANY() and row comparison) don't change
because they act more like operators.

I guess it's fairly unlikely that users rely on column names being
"?column?", but it does change the name of some expressions, for example:
  select (select 1 foo)::int;
  select case when true then 1 else (select 1 as foo) end;

Previously these returned column names "int4" and "case", now they would
return "foo". Personally I prefer it this way, but if it is considered a
compatibility problem, lowering the strength of subquery names in
FigureColnameInternal would restore the old behavior.

How this affects different queries can be seen from the regression diffs.

Does this sound like a good idea?
Should I submit this to the CommitFest?


Regards,
Marti Raudsepp
From c119ba8bf4d72a676aa1fc5a4d42c93f9902efaf Mon Sep 17 00:00:00 2001
From: Marti Raudsepp 
Date: Wed, 31 Aug 2011 23:53:04 +0300
Subject: [PATCH] Generate column names for subquery expressions

(SELECT 1 AS foo) => foo
exists(SELECT 1)  => exists
array(SELECT 1)   => array
---
 src/backend/parser/parse_target.c|   29 +
 src/test/regress/expected/aggregates.out |6 +++---
 src/test/regress/expected/subselect.out  |   12 ++--
 src/test/regress/expected/with.out   |4 ++--
 4 files changed, 40 insertions(+), 11 deletions(-)

diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 9d4e580..378d8ec 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1585,6 +1585,35 @@ FigureColnameInternal(Node *node, char **name)
 return FigureColnameInternal(ind->arg, name);
 			}
 			break;
+		case T_SubLink:
+			switch (((SubLink *) node)->subLinkType)
+			{
+case EXISTS_SUBLINK:
+	*name = "exists";
+	return 2;
+
+case ARRAY_SUBLINK:
+	*name = "array";
+	return 2;
+
+case EXPR_SUBLINK:
+	/* Get column name from the subquery's target list */
+	{
+		SubLink	   *sublink = (SubLink *) node;
+		Query	   *query = (Query *) sublink->subselect;
+		/* EXPR_SUBLINK always has a single target */
+		TargetEntry *te = (TargetEntry *) linitial(query->targetList);
+
+		/* Subqueries have already been transformed */
+		if(te->resname)
+		{
+			*name = te->resname;
+			return 2;
+		}
+	}
+	break;
+			}
+			break;
 		case T_FuncCall:
 			*name = strVal(llast(((FuncCall *) node)->funcname));
 			return 2;
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index 4861006..69926f7 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -300,9 +300,9 @@ LINE 4:where sum(distinct a.four + b.four) = b.four)...
 select
   (select max((select i.unique2 from tenk1 i where i.unique1 = o.unique1)))
 from tenk1 o;
- ?column? 
---
- 
+ max  
+--
+ 
 (1 row)
 
 --
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index e638f0a..4ea8211 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -490,20 +490,20 @@ select view_a from view_a;
 (1 row)
 
 select (select view_a) from view_a;
- ?column? 
---
+ view_a 
+
  (42)
 (1 row)
 
 select (select (select view_a)) from view_a;
- ?column? 
---
+ view_a 
+
  (42)
 (1 row)
 
 select (select (a.*)::text) from view_a a;
- ?column? 
---
+  a   
+--
  (42)
 (1 row)
 
diff --git a/src/test/regress/expected/with.out b/src/test/regress/expected/with.out
index a1b0899..c4b0456 100644
--- a/src/test/regress/expected/with.out
+++ b/src/test/regress/expected/with.out
@@ -1065,7 +1065,7 @@ with cte(foo) as ( select 42 ) select * from ((select foo from cte)) q;
 select ( with cte(foo) as ( values(f1) )
  select (select foo from cte) )
 from int4_tbl;
-  ?column?   
+ foo 
 -
0
   123456
@@ -1077,7 +1077,7 @@ from int4_tbl;
 select ( with cte(foo) as ( values(f1) )
   values((select foo from cte)) )
 from int4_tbl;
-  ?column?   
+   column1   
 -
0
   123456
-- 
1.7.6.1




Re: [HACKERS] postgresql.conf archive_command example

2011-08-31 Thread Cédric Villemain
2011/8/31 Peter Eisentraut :
> On ons, 2011-08-31 at 11:18 +0200, Cédric Villemain wrote:
>> Just a question: can we build a different postgresql.conf for windows
>> or do we add a windows command example here as well ?
>
> Well, we could make initdb patch it up, but that might seem excessive.

sure. I was wondering if it was already possible, not proposing to do it.

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation



Re: [HACKERS] pg_restore --no-post-data and --post-data-only

2011-08-31 Thread Andrew Dunstan



On 08/31/2011 04:03 PM, Alvaro Herrera wrote:


>>> Well, the Unix approach is to use tools that do one thing well to build up more 
>>> complex tools. Making pg_dump run some external command to inject things into 
>>> the stream seems like the wrong thing given this philosophy. Use pg_dump to get 
>>> the bits you want (pre-data, post-data) and sandwich them around whatever else 
>>> you want.
>>
>> I agree... except for one little niggling concern: If pg_dump is injecting 
>> something, then the DDL is being grabbed with a single, consistent snapshot. 
>> --pre and --post do not get you that (though we could probably use the new 
>> ability to export snapshots to fix that...)
>
> Eh, --pre and --post are pg_restore flags, so you already have a
> consistent snapshot.



We've been talking about adding them for pg_dump too.

I take Jim's point about the snapshot, but I still don't feel it's a 
good reason to allow some arbitrary code or script to be run between 
them (and after all, it's not likely to run with the same snapshot anyway).



cheers

andrew



[HACKERS] WIP: SP-GiST, Space-Partitioned GiST

2011-08-31 Thread Oleg Bartunov
Hi there,

attached is WIP-patch for 9.2 development source tree, which provides
implementation of SP-GiST (prototype
was presented at PGCon-2011, see
http://www.pgcon.org/2011/schedule/events/309.en.html and presentation
for details) as a core feature.  Main differences from prototype version:

1. Now it's part of pg core, not contrib module
2. It provides more operations for quadtree and suffix tree
3. It uses clustering algorithm of nodes on disk and has much better
utilization of disk space. Fillfactor is supported
4. Some corner cases were eliminated
5. It provides support for concurrency and recovery (inserts are
logged; support for deletes and log replay will be added really
soon)

So, the code now contains almost all the overhead of production code,
and we ask hackers to test performance on real data sets. We expect
the same performance for random data (since there is almost no overlap) and
much better performance on real-life data, plus much better index
creation time. Also, we appreciate your comments and suggestions about
API.

Regards,

Oleg


spgist_patch-0.84.gz
Description: GNU Zip compressed data



Re: [HACKERS] Bogus nestloop join estimate, ignores WHERE clause

2011-08-31 Thread Tom Lane
Marti Raudsepp  writes:
> After a bit of digging, I figured out that it uses the same estimate
> as a semi-join WITHOUT the client_id restriction.
> ...
> For whatever reason, the 1st query completely ignores the fact that
> the client_id clause reduces the result count by a large factor.

Could we see the pg_stats rows for the two join columns?

regards, tom lane



[HACKERS] Re: [COMMITTERS] pgsql: Ensure that contrib/pgstattuple functions respond to cancel

2011-08-31 Thread Robert Haas
On Fri, Apr 2, 2010 at 12:17 PM, Tom Lane  wrote:
> Log Message:
> ---
> Ensure that contrib/pgstattuple functions respond to cancel interrupts
> reasonably promptly, by adding CHECK_FOR_INTERRUPTS in the per-page loops.
>
> Tatsuhito Kasahara

This patch seems to have overlooked pgstatindex().

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] toast error after pg_upgrade 9.0.4 -> 9.1rc1

2011-08-31 Thread Lou Picciano
After running an essentially uneventful* pg_upgrade from 9.0.4 -> 9.1rc1, we 
are seeing some toast errors logged on the new cluster: 

All are of this pattern: ERROR: missing chunk number 0 for toast value 130087 
in pg_toast_34735 

Have seen the same pattern for a few of the databases in the 9.1rc1 cluster, 
and all as a result of a select on a usr table (the offending SELECT happens to 
be the first one any of these DBs sees, as it's the first step in a user 
authentication process). SELECT count(*) does not produce an error. 

*almost uneventful: We also saw messages that the destination cluster did not 
have one of our schema - (of course it didn't!) - I didn't realize pg_upgrade 
doesn't 'do' schema? 
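(For what it's worth, the owning table of a given toast relation can be looked up in the catalogs; a query along these lines — not from the thread, using the toast name from the error above — identifies which user table to inspect:)

```
-- Find the table whose TOAST relation is pg_toast_34735
SELECT c.oid::regclass AS owning_table
FROM pg_class c
WHERE c.reltoastrelid = 'pg_toast.pg_toast_34735'::regclass;
```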

Lou Picciano 


Re: [HACKERS] pg_restore --no-post-data and --post-data-only

2011-08-31 Thread Alvaro Herrera
Excerpts from Jim Nasby's message of Wed Aug 31 16:45:59 -0300 2011:
> On Aug 26, 2011, at 5:23 PM, Andrew Dunstan wrote:
> > On 08/26/2011 04:46 PM, Jim Nasby wrote:
> >> On Aug 26, 2011, at 12:15 PM, Andrew Dunstan wrote:
> >>> I knew there would be some bike-shedding about how we specify these 
> >>> things, which is why I haven't written docs yet.
> >> While we're debating what shade of yellow to paint the shed...
> >> 
> >> My actual use case is to be able to "inject" SQL into a 
> >> SQL-formatted dump either pre- or post-data (I'm on 8.3, so I don't 
> >> actually dump any data; I'm *mostly* emulating the ability to dump data on 
> >> just certain tables).
> >> 
> >> So for what I'm doing, the ideal interface would be a way to tell pg_dump 
> >> "When you're done dumping all table structures but before you get to any 
> >> constraints, please run $COMMAND and inject it's output into the dump 
> >> output." For some of the data obfuscation we're doing it would be easiest 
> >> if $COMMAND was a perl script instead of SQL, but we could probably 
> >> convert it.
> >> 
> >> Of course, many other folks actually need the ability to just spit out 
> >> specific portions of the dump; I'm hoping we can come up with something 
> >> that supports both concepts.
> >> 
> > 
> > Well, the Unix approach is to use tools that do one thing well to build up 
> > more complex tools. Making pg_dump run some external command to inject 
> > things into the stream seems like the wrong thing given this philosophy. 
> > Use pg_dump to get the bits you want (pre-data, post-data) and sandwich 
> > them around whatever else you want.
> 
> I agree... except for one little niggling concern: If pg_dump is injecting 
> something, then the DDL is being grabbed with a single, consistent snapshot. 
> --pre and --post do not get you that (though we could probably use the new 
> ability to export snapshots to fix that...)

Eh, --pre and --post are pg_restore flags, so you already have a
consistent snapshot.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] pg_restore --no-post-data and --post-data-only

2011-08-31 Thread Jim Nasby
On Aug 26, 2011, at 5:23 PM, Andrew Dunstan wrote:
> On 08/26/2011 04:46 PM, Jim Nasby wrote:
>> On Aug 26, 2011, at 12:15 PM, Andrew Dunstan wrote:
>>> I knew there would be some bike-shedding about how we specify these things, 
>>> which is why I haven't written docs yet.
>> While we're debating what shade of yellow to paint the shed...
>> 
>> My actual use case is to be able to "inject" SQL into a 
>> SQL-formatted dump either pre- or post-data (I'm on 8.3, so I don't actually 
>> dump any data; I'm *mostly* emulating the ability to dump data on just 
>> certain tables).
>> 
>> So for what I'm doing, the ideal interface would be a way to tell pg_dump 
>> "When you're done dumping all table structures but before you get to any 
>> constraints, please run $COMMAND and inject it's output into the dump 
>> output." For some of the data obfuscation we're doing it would be easiest if 
>> $COMMAND was a perl script instead of SQL, but we could probably convert it.
>> 
>> Of course, many other folks actually need the ability to just spit out 
>> specific portions of the dump; I'm hoping we can come up with something that 
>> supports both concepts.
>> 
> 
> Well, the Unix approach is to use tools that do one thing well to build up 
> more complex tools. Making pg_dump run some external command to inject things 
> into the stream seems like the wrong thing given this philosophy. Use pg_dump 
> to get the bits you want (pre-data, post-data) and sandwich them around 
> whatever else you want.

I agree... except for one little niggling concern: If pg_dump is injecting 
something, then the DDL is being grabbed with a single, consistent snapshot. 
--pre and --post do not get you that (though we could probably use the new 
ability to export snapshots to fix that...)

> As for getting data from just certain tables, I just posted a patch for 
> pg_dump to exclude data for certain tables, and we could look at providing a 
> positive as well as a negative filter if there is sufficient demand.

Unfortunately some of the dumped data needs to be sanitized, so that won't work 
unless I can also dump an arbitrary SELECT. But yes, a positive filter would 
definitely be welcome.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] setlocale() and gettext on Windows revisited

2011-08-31 Thread Heikki Linnakangas
Back in January/February, a patch was committed to avoid using libintl's 
version of setlocale:


http://archives.postgresql.org/pgsql-hackers/2011-01/msg02628.php

The comment says it was about a problem with printf() and friends, so I 
wonder, why was that "#undef setlocale" line put inside the larger 
"#ifdef USE_REPL_SNPRINTF" block? If I understand the problem correctly, 
it has nothing to do with our replacement snprintf() function.


Fortunately, we always use the replacement snprintf() code on Windows, 
so there's no user-visible bug here, but if you imagine that we didn't 
USE_REPL_SNPRINTF on Windows, we would still want the "#undef setlocale" 
to take effect, right? I think that block is misplaced.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] postgresql.conf archive_command example

2011-08-31 Thread Peter Eisentraut
On ons, 2011-08-31 at 11:18 +0200, Cédric Villemain wrote:
> Just a question: can we build a different postgresql.conf for windows
> or do we add a windows command example here as well ?

Well, we could make initdb patch it up, but that might seem excessive.




Re: [HACKERS] pg_upgrade automatic testing

2011-08-31 Thread Peter Eisentraut
On tis, 2011-08-30 at 16:25 -0400, Tom Lane wrote:
> So I think that as given, this script is only useful for testing
> pg_upgrade of $currentversion to $currentversion.  Which is surely
> better than no test at all, but it would not for example have caught
> the 8.3 incompatibility that was just reported.

Well, the goal was always current to current version.  Cross-version
testing is obviously important, but will be quite a bit harder.

> How can we improve things here?  I've toyed with the idea of
> installing pg_regress.so so that we can refer to it relative to
> $libdir, but that might be a bit invasive, especially if we were to
> try to back-patch it as far as 8.3. 

Aside from hesitations to backpatch those sorts of changes, it would
effectively prevent us from ever removing anything from the C libraries
used in the regression tests, because we need to keep the symbols around
so that the schema dump can load successfully into the new instance.

I think a solution would have to be one of:

1) pg_upgrade needs a mode to cope with these situations.  It can tell
the user, I upgraded your installation, but some dynamic modules appear
to be missing, you need to sort that out before you can put this back
into use.

2) Design a different test schema to load into the database before
running pg_upgrade.  This would then be a one-line change in the script.





Re: [HACKERS] "stored procedures"

2011-08-31 Thread Merlin Moncure
On Wed, Aug 31, 2011 at 9:00 AM, Thom Brown  wrote:
> On 9 May 2011 20:52, Merlin Moncure  wrote:
>> On Mon, May 9, 2011 at 1:41 PM, Bruce Momjian  wrote:
>>> Josh Berkus wrote:
 Peter,

 > I would like to collect some specs on this feature.  So does anyone have
 > links to documentation of existing implementations, or their own spec
 > writeup?  A lot of people appear to have a very clear idea of this
 > concept in their own head, so let's start collecting those.

 Delta between SPs and Functions for PostgreSQL:

 * SPs are executed using CALL or EXECUTE, and not SELECT.

 * SPs do not return a value
 ** optional: SPs *may* have OUT parameters.
>>>
>>> [ Late reply.]
>>>
>>> What is it about stored procedures that would require it not to return a
>>> value or use CALL?  I am trying to understand what part of this is
>>> "procedures" (doesn't return a value, we decided there isn't much value
>>> for that syntax vs. functions), and anonymous transactions.
>>
>> FWICT the sql standard.  The only summary of standard behaviors I can
>> find outside of the standard itself is here:
>> http://farrago.sourceforge.net/design/UserDefinedTypesAndRoutines.html.
>>  Peter's synopsis of how the standard works is murky at best and
>> competing implementations are all over the place...SQL server's
>> 'CALL'  feature is basically what I personally would like to see. It
>> would complement our functions nicely.
>>
>> Procedures return values and are invoked with CALL.  Functions return
>> values and are in-query callable.
>>
>> The fact that 'CALL' is not allowed inside a query seems to make it
>> pretty darn convenient to make the additional distinction of allowing
>> transactional control statements there and not in functions.  You
>> don't *have* to allow transactional control statements and could offer
>> this feature as an essentially syntax sugar enhancement, but then run
>> the risk of boxing yourself out of a useful properties of this feature
>> later on because of backwards compatibility issues (in particular, the
>> assumption that you are in a running transaction in the procedure
>> body).
>
> I've seen no mention of SQL/PSM.  Isn't all of this covered by that?

That's the $64k question. My take is that 'CALL' doesn't implicitly
set up a transaction state, and a proper PSM implementation would
allow transaction control mid-procedure.  Functions will always be
called in-transaction, since there is no way I can see to execute a
function except from an outer query (or the special case of DO).  I
think there's zero point in making CALL work without dealing with the
transaction issue -- in fact it could end up being a huge mistake to
do so.
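A purely hypothetical sketch of the semantics being argued for (this syntax did not exist in PostgreSQL at the time; it only illustrates the distinction):

    -- Hypothetical: a procedure invoked via CALL, outside any
    -- transaction, controlling transactions itself -- something a
    -- function, always called from inside a query, cannot do.
    CREATE PROCEDURE batch_archive() AS $$
    BEGIN
        DELETE FROM archive_q WHERE age < 30; COMMIT;  -- chunk 1 committed
        DELETE FROM archive_q WHERE age < 60; COMMIT;  -- chunk 2 committed
    END;
    $$;

    CALL batch_archive();  -- no surrounding transaction or snapshot set up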

Pavel's PSM implementation (see:
http://www.pgsql.cz/index.php/SQL/PSM_Manual) works under the
constraints of pg's understanding of what functions should and should
not be allowed to do.   It allows creation of PSM *functions* --
that's all.

IMNSHO, stored procedures should run in-process, and the execution
engine needs to be modified to not automatically spin up a transaction
and a snapshot when running them, but must allow a pl to do that at
appropriate times.  plpgsql and the other pls fwict make no
assumptions that strictly invalidate their use in that fashion outside
of some unfortunate ambiguity issues around 'begin', 'end', etc.  If
there is no current transaction, each statement should create one if
it's determined that the statement is interfacing with the sql engine
in such a way a transaction would be required, and immediately tear it
down, exactly as if an sql script was run inside the backend.  The SPI
interface can probably work 'as-is', and should probably return an
error if you arrive into certain functions while not in transaction.

An out of process, autonomous transaction type implementation should
probably not sit under stored procedures for a number of reasons --
mainly that it's going to expose too many implementation details to
the user.  For example, does a SP heavy app have 2*N running
processes?  Or do we slot them into a defined number of backends for
that purpose? Yuck & yuck.  I like the AT feature, and kludge it
frequently via dblink, but it's a solution for a different set of
problems.
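For the record, the dblink kludge mentioned above typically looks something like this (connection string, function, and table names are made up; dblink_exec is the real contrib API):

    -- The inner INSERT runs on its own connection and commits
    -- immediately, so it survives even if the calling transaction
    -- later rolls back -- a poor man's autonomous transaction.
    CREATE OR REPLACE FUNCTION log_autonomously(msg text) RETURNS void AS $$
        SELECT dblink_exec('dbname=mydb',
                           'INSERT INTO audit_log(msg) VALUES ('
                           || quote_literal(msg) || ')');
    $$ LANGUAGE sql;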

merlin



[HACKERS] WIP: SP-GiST, Space-Partitioned GiST

2011-08-31 Thread Oleg Bartunov

Hi there,

attached is our WIP-patch for 9.2 development source tree, which provides
implementation of SP-GiST (prototype was presented at PGCon-2011, see
http://www.pgcon.org/2011/schedule/events/309.en.html and presentation
for details) as a core feature.  Main differences from prototype version:

1. Now it's part of pg core, not contrib module
2. It provides more operations for quadtree and suffix tree
3. It uses clustering algorithm of nodes on disk and has much better
utilization of disk space. Fillfactor is supported
4. Some corner cases were eliminated
5. It provides support for concurrency and recovery (inserts are
logged, deletes are supported, and log replay will be added really
soon)

So, the code now contains almost all the overhead of production code
and we ask hackers to test performance on real data sets. We expect
the same performance for random data (since almost no overlaps) and
much better performance on real-life data, plus much better index
creation time. Also, we appreciate your comments and suggestions about
API.

Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

spgist_patch-0.84.gz
Description: Binary data



Re: [HACKERS] sha1, sha2 functions into core?

2011-08-31 Thread Ross J. Reedstrom
On Fri, Aug 12, 2011 at 10:14:58PM +0300, Marko Kreen wrote:
> On Thu, Aug 11, 2011 at 5:46 PM, Tom Lane  wrote:
> > Marko Kreen  writes:
> >> On Wed, Aug 10, 2011 at 9:19 PM, Tom Lane  wrote:
> >>> ... which this approach would create, because digest() isn't restricted
> >>> to just those algorithms.  I think it'd be better to just invent two
> >>> new functions, which also avoids issues for applications that currently
> >>> expect the digest functions to be installed in pgcrypto's schema.
> >
> >> I would suggest digest() with fixed list of algorithms: md5, sha1, sha2.
> >
> >> The uncommon/obsolete algorithms that can be used
> >> from digest() if compiled with openssl, are not something we
> >> need to worry over.  In fact we have never "supported" them,
> >> as no testing has been done.
> >
> > Hmm ... they may be untested by us, but I feel sure that if we remove
> > that functionality from pgcrypto, *somebody* is gonna complain.
> 
> If you dont want to break digest() but do not want such behaviour in core,
> we could go with hash(data, algo) that has a fixed number of digests,
> but also a couple of non-cryptographic hashes like crc32, lookup2/3.
> This would also fix the problem of people using hashtext() in user code.
 
Hmm, this thread seems to have petered out without a conclusion. Just
wanted to comment that there _are_ non-password storage uses for these
digests: I use them in the context of storing large files in a bytea
column, as a means of doing data deduplication, and of avoiding pushing
files from clients to the server and back.
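As a sketch of that deduplication pattern (table and column names are invented; digest() is pgcrypto's):

    CREATE TABLE file_store (
        sha1 bytea PRIMARY KEY,   -- digest of the contents
        data bytea NOT NULL
    );

    -- Store the payload only if an identical file is not already
    -- present; a client that already knows the digest need not send
    -- the bytes at all.
    PREPARE store_file(bytea) AS
    INSERT INTO file_store (sha1, data)
    SELECT digest($1, 'sha1'), $1
    WHERE NOT EXISTS (
        SELECT 1 FROM file_store WHERE sha1 = digest($1, 'sha1'));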

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer & Admin, Research Scientistphone: 713-348-6166
Connexions  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Bruce Momjian

FYI, I am working with depesz on IM right now and will report back when
we have a cause of the bug.  I was without electric power for 53
hours, which is why I am late in replying to this report.

---

daveg wrote:
> On Mon, Aug 29, 2011 at 07:49:24PM +0200, hubert depesz lubaczewski wrote:
> > On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:
> > vacuumdb: vacuuming of database "etsy_v2" failed: ERROR:  could not access status of transaction 3429738606
> > DETAIL:  Could not open file "pg_clog/0CC6": No such file or directory.
> > 
> > Interestingly.
> > 
> > In old dir there is pg_clog directory with files:
> > 0AC0 .. 0DAF (including 0CC6, size 262144)
> > but new pg_clog has only:
> > 0D2F .. 0DB0
> > 
> > File content - nearly all files that exist in both places are the same, 
> > with exception of 2 newest ones in new datadir:
> > 3c5122f3e80851735c19522065a2d12a  0DAF
> > 8651fc2b9fa3d27cfb5b496165cead68  0DB0
> > 
> > 0DB0 doesn't exist in old, and 0DAF has different md5sum: 
> > 7d48996c762d6a10f8eda88ae766c5dd
> > 
> > one more thing. I did select count(*) from transactions and it worked.
> > 
> > that's about it. I can probably copy over files from old datadir to new (in
> > pg_clog/), and will be happy to do it, but I'll wait for your call - retry 
> > with
> > copies files might destroy some evidence.
> 
> I had this same thing happen this Saturday just past and my client had to
> restore the whole 2+ TB instance from the previous days pg_dumps.
> I had been thinking that perhaps I did something wrong in setting up or
> running the upgrade, but had not found it yet. Now that I see Hubert has
> the same problem it is starting to look like pg_upgrade can eat all your
> data.
> 
> After running pg_upgrade apparently successfully and analyzeing all the
> tables we restarted the production workload and started getting errors:
> 
> 2011-08-27 04:18:34.015  12337  c06  postgres  ERROR:  could not access status of transaction 2923961093
> 2011-08-27 04:18:34.015  12337  c06  postgres  DETAIL:  Could not open file "pg_clog/0AE4": No such file or directory.
> 2011-08-27 04:18:34.015  12337  c06  postgres  STATEMENT:  analyze public.b_pxx;
> 
> On examination the pg_clog directory contained only two files timestamped
> after the startup of the new cluster with 9.0.4. Other hosts that upgraded
> successfully had numerous files in pg_clog dating back a few days. So it
> appears that all the clog files went missing during the upgrade somehow.
> This happened upgrading from 8.4.7 to 9.0.4, with a brief session in between
> at 8.4.8. We have upgraded several hosts to 9.0.4 successfully previously.
> 
> -dg
> 
> -- 
> David Gould   da...@sonic.net  510 536 1443510 282 0869
> If simplicity worked, the world would be overrun with insects.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] limit in subquery causes poor selectivity estimation

2011-08-31 Thread Tom Lane
Robert Haas  writes:
> On Wed, Aug 31, 2011 at 6:22 AM, Peter Eisentraut  wrote:
>> I liked the old one better. ;-)

> AFAICS, those plans are identical, except for a minor difference in
> the cost of scanning test2.

The point is that the estimate of the result size is worse in 8.4.8.

I am not, however, convinced that 8.4.7 was actually smarter ... it
may have been getting the right answer for the wrong reason.

regards, tom lane



Re: [HACKERS] limit in subquery causes poor selectivity estimation

2011-08-31 Thread Robert Haas
On Wed, Aug 31, 2011 at 6:22 AM, Peter Eisentraut  wrote:
> On lör, 2011-08-27 at 13:32 -0400, Tom Lane wrote:
>> > EXPLAIN SELECT * FROM test1  WHERE sha1 in (SELECT sha1 FROM test2
>> LIMIT 200);
>>
>> > Here, however, it has apparently not passed this knowledge through
>> the
>> > LIMIT.
>>
>> The LIMIT prevents the subquery from being flattened entirely, ie we
>> don't have just "test1 SEMI JOIN test2" but "test1 SEMI JOIN (SELECT *
>> FROM test2 LIMIT 200)".  If you look at examine_variable in selfuncs.c
>> you'll note that it punts for Vars coming from unflattened subqueries.
>>
>> > So what's up with that?  Just a case of, we haven't thought about
>> > covering this case yet, or are there larger problems?
>>
>> The larger problem is that if a subquery didn't get flattened, it's
>> often because it's got LIMIT, or GROUP BY, or some similar clause that
>> makes it highly suspect whether the statistics available for the table
>> column are reasonable to use for the subquery outputs.  It wouldn't be
>> that hard to grab the stats for test2.sha1, but then how do you want
>> to adjust them to reflect the LIMIT?
>
> It turns out that this is a regression introduced in 8.4.8; the same
> topic is also being discussed in
>
> http://archives.postgresql.org/pgsql-performance/2011-08/msg00248.php
>
> and
>
> http://archives.postgresql.org/pgsql-general/2011-08/msg00995.php
>
> This is the (previously posted) plan with 8.4.8:
>
>                                    QUERY PLAN
> --
>  Hash Join  (cost=10.60..34.35 rows=500 width=31)
>   Hash Cond: (test1.sha1 = test2.sha1)
>   ->  Seq Scan on test1  (cost=0.00..18.00 rows=1000 width=31)
>   ->  Hash  (cost=8.10..8.10 rows=200 width=32)
>         ->  HashAggregate  (cost=6.10..8.10 rows=200 width=32)
>               ->  Limit  (cost=0.00..3.60 rows=200 width=21)
>                     ->  Seq Scan on test2  (cost=0.00..18.01 rows=1001 width=21)
>
> And this is the plan with 8.4.7:
>
>                                    QUERY PLAN
> --
>  Hash Join  (cost=10.80..34.55 rows=200 width=31)
>   Hash Cond: (test1.sha1 = test2.sha1)
>   ->  Seq Scan on test1  (cost=0.00..18.00 rows=1000 width=31)
>   ->  Hash  (cost=8.30..8.30 rows=200 width=32)
>         ->  HashAggregate  (cost=6.30..8.30 rows=200 width=32)
>               ->  Limit  (cost=0.00..3.80 rows=200 width=21)
>                     ->  Seq Scan on test2  (cost=0.00..19.01 rows=1001 width=21)
>
> I liked the old one better. ;-)

AFAICS, those plans are identical, except for a minor difference in
the cost of scanning test2.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] tab stop in README

2011-08-31 Thread YAMAMOTO Takashi
> On Sun, Aug 28, 2011 at 8:28 PM, YAMAMOTO Takashi
>  wrote:
>>> On men, 2011-08-22 at 04:09 +, YAMAMOTO Takashi wrote:
 i know that postgresql uses ts=4 for C source code.
 but how about documatation?
>>>
>>> I'd say ideally don't use any tabs at all.
>>
>> i agree.
>>
>>> It appears to be geared for ts=4.  Could you send a patch or other
>>> indication for what you think needs changing?
>>
>> attached.
> 
> I'm confused by this patch, because it doesn't seem to get rid of all
> the tabs in the file.  Nor does it seem to replace tabs with spaces.
> It looks like it's just randomly removing and adding tabs in various
> places.

the patch just fixes the indentation for ts=4, keeping tabs.

should i run "expand -t4" and send the result?

YAMAMOTO Takashi

> 
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread hubert depesz lubaczewski
On Wed, Aug 31, 2011 at 01:23:05PM -0400, Bruce Momjian wrote:
> Can you get me the 9.0.X pg_class.relfrozenxid for the toast and heap
> tables involved?

Sure:

=# select oid::regclass, relfrozenxid from pg_class  where relname in 
('transactions', 'pg_toast_106668498');
             oid             | relfrozenxid 
-----------------------------+--------------
 pg_toast.pg_toast_106668498 |   3673553926
 transactions                |   3623560321
(2 rows)

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Bruce Momjian
Alvaro Herrera wrote:
> > > I don't understand the pg_upgrade code here.  It is setting the
> > > datfrozenxid and relfrozenxid values to the latest checkpoint's NextXID,
> > > 
> > > /* set pg_class.relfrozenxid */
> > > PQclear(executeQueryOrDie(conn,
> > >   "UPDATE   pg_catalog.pg_class "
> > >   "SET  relfrozenxid = '%u' "
> > > /* only heap and TOAST are vacuumed */
> > >   "WHERErelkind IN ('r', 't')",
> > >   old_cluster.controldata.chkpnt_nxtxid));
> > > 
> > > but I don't see why this is safe.  I mean, surely the previous
> > > vacuum might have been a lot earlier than that.  Are these values reset
> > > to more correct values (i.e. older ones) later somehow?  My question is,
> > > why isn't the new cluster completely screwed?
> > 
> > Have you looked at my pg_upgrade presentation?
> > 
> > http://momjian.us/main/presentations/features.html#pg_upgrade
> 
> I just did, but it doesn't explain this in much detail.  (In any case I
> don't think we should be relying on a PDF presentation to explain the
> inner pg_upgrade details.  I think we should rely more on the
> IMPLEMENTATION file rather than your PDF ... amusingly that file doesn't
> mention the frozenxids.)
> 
> > This query happens after we have done a VACUUM FREEZE on an empty
> > cluster.
> 
> Oh, so it only affects the databases that initdb created, right?
> The other ones are not even created yet.

Right.

> > pg_dump --binary-upgrade will dump out the proper relfrozen xids for
> > every object that gets its file system files copied or linked.
> 
> Okay.  I assume that between the moment you copy the pg_clog files from
> the old server, and the moment you do the UPDATEs on pg_class and
> pg_database, there is no chance for vacuum to run and remove clog
> segments.

Right, we disable it, and had a long discussion about it.  We actually
start the server with:

"-c autovacuum=off -c autovacuum_freeze_max_age=20",

> Still, it seems to me that this coding makes Min(datfrozenxid) go
> backwards, and that's bad news.

Yes, it is odd, but I don't see another option.  Remember the problem
with xid wrap-around --- we really are defining two different xid eras,
and have to freeze to make that possible.

> > > I wonder if pg_upgrade shouldn't be doing the conservative thing here,
> > > which AFAICT would be to set all frozenxid values as furthest in the
> > > past as possible (without causing a shutdown-due-to-wraparound, and
> > > maybe without causing autovacuum to enter emergency mode either).
> > 
> > I already get complaints about requiring an "analyze" run after the
> > upgrade --- this would make it much worse.  In fact I have to look into
> > upgrading optimizer statistics someday.
> 
> Why would it make it worse at all?  It doesn't look to me like it
> would affect anything.  The only thing it does is tell the system
> to keep clog segments around.

It will cause excessive vacuum freezing to happen on startup, I assume.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Alvaro Herrera
Excerpts from Bruce Momjian's message of Wed Aug 31 13:23:07 -0300 2011:
> Alvaro Herrera wrote:
> > Excerpts from hubert depesz lubaczewski's message of Mon Aug 29 14:49:24 
> > -0300 2011:
> > > On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:
> > > > On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski 
> > > > wrote:
> > > > > On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
> > > > > > 
> > > > > > OK, this was very helpful.  I found out that there is a bug in 
> > > > > > current
> > > > > > 9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded 
> > > > > > temp
> > > > > > tables.  (The bug is not in any released version of pg_upgrade.)  
> > > > > > The
> > > > > > attached, applied patches should fix it for you.  I assume you are
> > > > > > running 9.0.X, and not 9.0.4.
> > > > > 
> > > > > pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.
> > > > 
> > > > vacuumdb failed. The fail looks very similar to the one I had on 9.0.4.
> > > > 
> > > > After long vacuum I got:
> > > > INFO:  vacuuming "pg_toast.pg_toast_106668498"
> > > > vacuumdb: vacuuming of database "etsy_v2" failed: ERROR:  could not access status of transaction 3429738606
> > > > DETAIL:  Could not open file "pg_clog/0CC6": No such file or directory.
> > 
> > I don't understand the pg_upgrade code here.  It is setting the
> > datfrozenxid and relfrozenxid values to the latest checkpoint's NextXID,
> > 
> > /* set pg_class.relfrozenxid */
> > PQclear(executeQueryOrDie(conn,
> >   "UPDATE   pg_catalog.pg_class "
> >   "SET  relfrozenxid = '%u' "
> > /* only heap and TOAST are vacuumed */
> >   "WHERErelkind IN ('r', 't')",
> >   old_cluster.controldata.chkpnt_nxtxid));
> > 
> > but I don't see why this is safe.  I mean, surely the previous
> > vacuum might have been a lot earlier than that.  Are these values reset
> > to more correct values (i.e. older ones) later somehow?  My question is,
> > why isn't the new cluster completely screwed?
> 
> Have you looked at my pg_upgrade presentation?
> 
> http://momjian.us/main/presentations/features.html#pg_upgrade

I just did, but it doesn't explain this in much detail.  (In any case I
don't think we should be relying on a PDF presentation to explain the
inner pg_upgrade details.  I think we should rely more on the
IMPLEMENTATION file rather than your PDF ... amusingly that file doesn't
mention the frozenxids.)

> This query happens after we have done a VACUUM FREEZE on an empty
> cluster.

Oh, so it only affects the databases that initdb created, right?
The other ones are not even created yet.

> pg_dump --binary-upgrade will dump out the proper relfrozen xids for
> every object that gets its file system files copied or linked.

Okay.  I assume that between the moment you copy the pg_clog files from
the old server, and the moment you do the UPDATEs on pg_class and
pg_database, there is no chance for vacuum to run and remove clog
segments.

Still, it seems to me that this coding makes Min(datfrozenxid) go
backwards, and that's bad news.

> > I wonder if pg_upgrade shouldn't be doing the conservative thing here,
> > which AFAICT would be to set all frozenxid values as furthest in the
> > past as possible (without causing a shutdown-due-to-wraparound, and
> > maybe without causing autovacuum to enter emergency mode either).
> 
> I already get complaints about requiring an "analyze" run after the
> upgrade --- this would make it much worse.  In fact I have to look into
> upgrading optimizer statistics someday.

Why would it make it worse at all?  It doesn't look to me like it
would affect anything.  The only thing it does is tell the system
to keep clog segments around.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Bruce Momjian
hubert depesz lubaczewski wrote:
> On Wed, Aug 31, 2011 at 12:16:03PM -0400, Bruce Momjian wrote:
> > hubert depesz lubaczewski wrote:
> > > On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
> > > > 
> > > > OK, this was very helpful.  I found out that there is a bug in current
> > > > 9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
> > > > tables.  (The bug is not in any released version of pg_upgrade.)  The
> > > > attached, applied patches should fix it for you.  I assume you are
> > > > running 9.0.X, and not 9.0.4.
> > > 
> > > pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.
> > > 
> > > will keep you posted.
> > 
> > FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas. 
> > Users can either wait for 9.1 RC2 or Final, or use the patch I posted. 
> > The bug is not in 9.0.4 and will not be in 9.0.5.
> 
> I assume you mean the bug that caused pg_upgrade to fail.

Yes.

> But there is still a bug (existing in 9.0.4 too) which causes vacuum to
> fail.

Yes.  We need to find the cause of that new bug.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Bruce Momjian
hubert depesz lubaczewski wrote:
> INFO:  vacuuming "pg_toast.pg_toast_106668498"
> vacuumdb: vacuuming of database "etsy_v2" failed: ERROR:  could not access status of transaction 3429738606
> DETAIL:  Could not open file "pg_clog/0CC6": No such file or directory.
> 
> Interestingly.
> 
> In old dir there is pg_clog directory with files:
> 0AC0 .. 0DAF (including 0CC6, size 262144)
> but new pg_clog has only:
> 0D2F .. 0DB0
> 
> File content - nearly all files that exist in both places are the same, with 
> exception of 2 newest ones in new datadir:
> 3c5122f3e80851735c19522065a2d12a  0DAF
> 8651fc2b9fa3d27cfb5b496165cead68  0DB0
> 
> 0DB0 doesn't exist in old, and 0DAF has different md5sum: 
> 7d48996c762d6a10f8eda88ae766c5dd
> 
> one more thing. I did select count(*) from transactions and it worked.

Count(*) worked because it didn't access any of the long/toasted values.

> that's about it. I can probably copy over files from old datadir to new (in
> pg_clog/), and will be happy to do it, but I'll wait for your call - retry 
> with
> copies files might destroy some evidence.

You can safely copy over any of the clog files that exist in the old
cluster but not in the new one, but another vacuum is likely to remove
those files again.  :-(

This sure sounds like a variation on the pg_upgrade/toast bug we fixed
in 9.0.4:

http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix

Can you get me the 9.0.X pg_class.relfrozenxid for the toast and heap
tables involved?

FYI, this is what pg_dump --binary-upgrade does to preserve the
relfrozenxids:

-- For binary upgrade, set heap's relfrozenxid
UPDATE pg_catalog.pg_class
SET relfrozenxid = '702'
WHERE oid = 'test'::pg_catalog.regclass;

-- For binary upgrade, set toast's relfrozenxid
UPDATE pg_catalog.pg_class
SET relfrozenxid = '702'
WHERE oid = '16434';

We also preserve the pg_class oids with:

-- For binary upgrade, must preserve pg_class oids
SELECT binary_upgrade.set_next_heap_pg_class_oid('16431'::pg_catalog.oid);
SELECT binary_upgrade.set_next_toast_pg_class_oid('16434'::pg_catalog.oid);
SELECT binary_upgrade.set_next_index_pg_class_oid('16436'::pg_catalog.oid);

The question is whether this is working, and if not, why not?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread hubert depesz lubaczewski
On Wed, Aug 31, 2011 at 12:16:03PM -0400, Bruce Momjian wrote:
> hubert depesz lubaczewski wrote:
> > On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
> > > 
> > > OK, this was very helpful.  I found out that there is a bug in current
> > > 9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
> > > tables.  (The bug is not in any released version of pg_upgrade.)  The
> > > attached, applied patches should fix it for you.  I assume you are
> > > running 9.0.X, and not 9.0.4.
> > 
> > pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.
> > 
> > will keep you posted.
> 
> FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas. 
> Users can either wait for 9.1 RC2 or Final, or use the patch I posted. 
> The bug is not in 9.0.4 and will not be in 9.0.5.

I assume you mean the bug that caused pg_upgrade to fail.

But there still is (existing in 9.0.4 too) bug which causes vacuum to
fail.

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Robert Haas
On Wed, Aug 31, 2011 at 12:16 PM, Bruce Momjian  wrote:
> hubert depesz lubaczewski wrote:
>> On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
>> >
>> > OK, this was very helpful.  I found out that there is a bug in current
>> > 9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
>> > tables.  (The bug is not in any released version of pg_upgrade.)  The
>> > attached, applied patches should fix it for you.  I assume you are
>> > running 9.0.X, and not 9.0.4.
>>
>> pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.
>>
>> will keep you posted.
>
> FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas.
> Users can either wait for 9.1 RC2 or Final, or use the patch I posted.
> The bug is not in 9.0.4 and will not be in 9.0.5.

Based on subsequent discussion on this thread, it sounds like
something is still broken.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Bruce Momjian
Alvaro Herrera wrote:
> Excerpts from hubert depesz lubaczewski's message of lun ago 29 14:49:24 
> -0300 2011:
> > On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:
> > > On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski wrote:
> > > > On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
> > > > > 
> > > > > OK, this was very helpful.  I found out that there is a bug in current
> > > > > 9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
> > > > > tables.  (The bug is not in any released version of pg_upgrade.)  The
> > > > > attached, applied patches should fix it for you.  I assume you are
> > > > > running 9.0.X, and not 9.0.4.
> > > > 
> > > > pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.
> > > 
> > > vacuumdb failed. The fail looks very similar to the one I had on 9.0.4.
> > > 
> > > After long vacuum I got:
> > > INFO:  vacuuming "pg_toast.pg_toast_106668498"
> > > vacuumdb: vacuuming of database "etsy_v2" failed: ERROR:  could not 
> > > access status of transaction 3429738606
> > > DETAIL:  Could not open file "pg_clog/0CC6": No such file or directory.
> 
> I don't understand the pg_upgrade code here.  It is setting the
> datfrozenxid and relfrozenxid values to the latest checkpoint's NextXID,
> 
> /* set pg_class.relfrozenxid */
> PQclear(executeQueryOrDie(conn,
>   "UPDATE   pg_catalog.pg_class "
>   "SET  relfrozenxid = '%u' "
> /* only heap and TOAST are vacuumed */
>   "WHERErelkind IN ('r', 't')",
>   old_cluster.controldata.chkpnt_nxtxid));
> 
> but I don't see why this is safe.  I mean, surely the previous
> vacuum might have been a lot earlier than that.  Are these values reset
> to more correct values (i.e. older ones) later somehow?  My question is,
> why isn't the new cluster completely screwed?

Have you looked at my pg_upgrade presentation?

http://momjian.us/main/presentations/features.html#pg_upgrade

This query happens after we have done a VACUUM FREEZE on an empty
cluster.

pg_dump --binary-upgrade will dump out the proper relfrozenxid values for
every object that gets its file system files copied or linked.
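For readers following along, the reason a stale or wrong relfrozenxid is dangerous comes down to PostgreSQL's circular xid comparison: once clog segments older than the (believed) frozen horizon are truncated away, any tuple whose real xid precedes it can no longer have its status looked up. A rough sketch of the comparison rule, modeled on TransactionIdPrecedes for normal xids (illustrative, not the server code):

```python
# Xids live on a 2^32 circle; "older" is defined by signed 32-bit
# distance rather than plain integer comparison.
def xid_precedes(a: int, b: int) -> bool:
    """True if xid a is logically older than xid b (normal xids only)."""
    diff = (a - b) & 0xFFFFFFFF
    return diff >= 0x80000000  # negative as signed 32-bit => a precedes b

print(xid_precedes(100, 200))         # -> True
print(xid_precedes(200, 100))         # -> False
# After wraparound, a tiny xid can still be "newer" than a huge one:
print(xid_precedes(4294967290, 100))  # -> True
```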

> I wonder if pg_upgrade shouldn't be doing the conservative thing here,
> which AFAICT would be to set all frozenxid values as furthest in the
> past as possible (without causing a shutdown-due-to-wraparound, and
> maybe without causing autovacuum to enter emergency mode either).

I already get complaints about requiring an "analyze" run after the
upgrade --- this would make it much worse.  In fact I have to look into
upgrading optimizer statistics someday.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread Bruce Momjian
hubert depesz lubaczewski wrote:
> On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
> > 
> > OK, this was very helpful.  I found out that there is a bug in current
> > 9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
> > tables.  (The bug is not in any released version of pg_upgrade.)  The
> > attached, applied patches should fix it for you.  I assume you are
> > running 9.0.X, and not 9.0.4.
> 
> pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.
> 
> will keep you posted.

FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas. 
Users can either wait for 9.1 RC2 or Final, or use the patch I posted. 
The bug is not in 9.0.4 and will not be in 9.0.5.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] casting between range types

2011-08-31 Thread Robert Haas
On Wed, Aug 31, 2011 at 11:36 AM, Heikki Linnakangas
 wrote:
> On 31.08.2011 18:09, Jeff Davis wrote:
>> On Wed, 2011-08-31 at 09:20 +0300, Heikki Linnakangas wrote:
>>>
>>> On 31.08.2011 09:14, Jeff Davis wrote:

 First, a range is really a set. So if we take '[1,10)'::int4range and
 cast that to numrange, we end up moving from a set of exactly 9 elements
 to a set of an infinite number of elements. Going the other way is
 probably worse.
>> ...
>>
>>> Can you only provide casts that make sense, like between int4 and
>>> numeric range types, and leave out the ones that don't?
>>
>> There are certainly some casts that make sense, like
>> int4range->int8range. Do you think int4range->numrange also makes sense?
>
> Not sure. It depends on whether you think of '[1,8]'::int4range as a finite
> set of the integers between 1 and 8, or as a continuous range from 1 to 8. I
> don't see harm in providing explicit casts like that, but I would be very
> conservative with implicit and assignment casts.

+1 for that approach.  It's really annoying when you can't explicitly
cast between data types, and it might be that you just allow coercion
via I/O functions since it's unlikely to be a performance-critical
operation.  But I can't see why you would want any implicit or
assignment casts at all.
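The discrete-vs-continuous distinction being debated can be made concrete with a toy sketch (illustrative only, not the proposed implementation):

```python
# A discrete (integer) range is a finite set; a continuous range is not,
# which is why int4range -> numrange is semantically ambiguous.

def int_range_elements(lo, hi, lo_inc=True, hi_inc=False):
    """Enumerate the members of a discrete range like '[lo,hi)'."""
    start = lo if lo_inc else lo + 1
    stop = hi + 1 if hi_inc else hi
    return list(range(start, stop))

# '[1,10)'::int4range is a set of exactly 9 elements...
print(len(int_range_elements(1, 10)))    # -> 9
# ...while the numeric range [1,10) also contains 1.5, 9.999, and so on.
print(1.5 in int_range_elements(1, 10))  # -> False
# Discrete canonicalization: '[1,8]' and '[1,9)' denote the same set.
print(int_range_elements(1, 8, hi_inc=True) == int_range_elements(1, 9))  # -> True
```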

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] rename index fields bug

2011-08-31 Thread Andrew Dunstan



On 08/31/2011 11:24 AM, Heikki Linnakangas wrote:

On 31.08.2011 18:20, Andrew Dunstan wrote:

I've just stumbled across this, which appears to be a regression from
8.4 that is present in 9.0 and master:

andrew=# create table foo (x int primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
"foo_pkey" for table "foo"
CREATE TABLE
andrew=# alter table foo rename x to y;
ALTER TABLE
andrew=# select attname from pg_attribute where attrelid =
'foo_pkey'::regclass;
 attname 
---------
 x
(1 row)

In 8.4 the index attribute is renamed correctly.


That was intentional:

commit c176e12c63844c0a2f3f8c568c3fe6c57d15
Author: Tom Lane 
Date:   Wed Dec 23 16:43:43 2009 +

    Remove code that attempted to rename index columns to keep them in sync
    with their underlying table columns.  That code was not bright enough to
    cope with collision situations (ie, new name conflicts with some other
    column of the index).  Since there is no functional reason to do this at
    all, trying to upgrade the logic to be bulletproof doesn't seem worth the
    trouble.

    This change means that both the index name and the column names of an
    index are set when it's created, and won't be automatically changed when
    the underlying table columns are renamed.  Neatnik DBAs are still free to
    rename them manually, of course.




Oh, I see.  Thanks.

cheers

andrew





Re: [HACKERS] casting between range types

2011-08-31 Thread Heikki Linnakangas

On 31.08.2011 18:09, Jeff Davis wrote:

On Wed, 2011-08-31 at 09:20 +0300, Heikki Linnakangas wrote:

On 31.08.2011 09:14, Jeff Davis wrote:

First, a range is really a set. So if we take '[1,10)'::int4range and
cast that to numrange, we end up moving from a set of exactly 9 elements
to a set of an infinite number of elements. Going the other way is
probably worse.


...


Can you only provide casts that make sense, like between int4 and
numeric range types, and leave out the ones that don't?


There are certainly some casts that make sense, like
int4range->int8range. Do you think int4range->numrange also makes sense?


Not sure. It depends on whether you think of '[1,8]'::int4range as a 
finite set of the integers between 1 and 8, or as a continuous range 
from 1 to 8. I don't see harm in providing explicit casts like that, but 
I would be very conservative with implicit and assignment casts.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] rename index fields bug

2011-08-31 Thread Heikki Linnakangas

On 31.08.2011 18:20, Andrew Dunstan wrote:

I've just stumbled across this, which appears to be a regression from
8.4 that is present in 9.0 and master:

andrew=# create table foo (x int primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
"foo_pkey" for table "foo"
CREATE TABLE
andrew=# alter table foo rename x to y;
ALTER TABLE
andrew=# select attname from pg_attribute where attrelid =
'foo_pkey'::regclass;
 attname 
---------
 x
(1 row)

In 8.4 the index attribute is renamed correctly.


That was intentional:

commit c176e12c63844c0a2f3f8c568c3fe6c57d15
Author: Tom Lane 
Date:   Wed Dec 23 16:43:43 2009 +

    Remove code that attempted to rename index columns to keep them in sync
    with their underlying table columns.  That code was not bright enough to
    cope with collision situations (ie, new name conflicts with some other
    column of the index).  Since there is no functional reason to do this at
    all, trying to upgrade the logic to be bulletproof doesn't seem worth the
    trouble.

    This change means that both the index name and the column names of an
    index are set when it's created, and won't be automatically changed when
    the underlying table columns are renamed.  Neatnik DBAs are still free to
    rename them manually, of course.



--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] rename index fields bug

2011-08-31 Thread Andrew Dunstan


I've just stumbled across this, which appears to be a regression from 
8.4 that is present in 9.0 and master:


   andrew=# create table foo (x int primary key);
   NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
   "foo_pkey" for table "foo"
   CREATE TABLE
   andrew=# alter table foo rename x to y;
   ALTER TABLE
   andrew=# select attname from pg_attribute where attrelid =
   'foo_pkey'::regclass;
    attname 
   ---------
    x
   (1 row)

In 8.4 the index attribute is renamed correctly.

This only came to light because it caused a londiste failure, making 
londiste think that there wasn't a key field. Arguably londiste should 
be using pg_index.indkey, but this should still work right.



cheers

andrew





Re: [HACKERS] casting between range types

2011-08-31 Thread Jeff Davis
On Wed, 2011-08-31 at 09:20 +0300, Heikki Linnakangas wrote:
> On 31.08.2011 09:14, Jeff Davis wrote:
> > First, a range is really a set. So if we take '[1,10)'::int4range and
> > cast that to numrange, we end up moving from a set of exactly 9 elements
> > to a set of an infinite number of elements. Going the other way is
> > probably worse.

...

> Can you only provide casts that make sense, like between int4 and 
> numeric range types, and leave out the ones that don't?

There are certainly some casts that make sense, like
int4range->int8range. Do you think int4range->numrange also makes sense?

Regards,
Jeff Davis




Re: [HACKERS] Comparing two PostgreSQL databases -- order of pg_dump output

2011-08-31 Thread Joe Abbate

On 08/31/2011 10:17 AM, Tom Lane wrote:

Short of that sort of anal-retentiveness, there are going to be cases
where the dump order is a bit unpredictable.  IMO what we need is a
reasonable compromise between verbosity and uniqueness, such that in
normal cases (ie, where you *didn't* intentionally create near-identical
functions in different schemas) you get a unique ordering.  To get to
that, somebody's got to go through all the tag writing code and identify
where the trouble spots are.  So far we've heard triggers and operators
nominated ... what else?


So far, for Pyrseas, I've tested aggregates, casts, constraint triggers, 
conversions, domains, functions, indexes, languages, operators, rules, 
schemas, sequences, tables (including check constraints, primary keys, 
foreign keys, unique constraints and inherited tables), triggers, types 
(base and composite), views and comments on the various objects.  I'll 
be testing operator classes and operator families in the coming weeks. 
So far, triggers and operators are the only ones that have caused an 
issue when using the technique suggested by Jaime (pg_dump -Fc followed 
by pg_restore -l).  Functions also caused problems in the plain text 
pg_dump, e.g., because funcx(geography) sorts after funcx(geometry) if 
the latter is created first.


Joe



Re: [HACKERS] Comparing two PostgreSQL databases -- order of pg_dump output

2011-08-31 Thread Tom Lane
Peter Eisentraut  writes:
> On tis, 2011-08-30 at 18:07 -0400, Tom Lane wrote:
>> Yeah, we've been around on that before.  pg_dump does actually sort the
>> output items (modulo dependency requirements), but it sorts by the same
>> "tag" values that are printed by pg_restore -l, and those aren't currently
>> designed to be unique.  It's not too clear if we could get away with
>> changing the definitions of the tag strings.

> It's a bit strange that the tag for a trigger is "name" but the tag for
> the trigger's comment is "name ON table".  Not having the table name in
> the trigger tag sounds wrong, because it makes the tag not very useful
> for selecting the trigger from the TOC.

I don't think changing that would be a problem.  What gets unpleasant is
trying to guarantee that pg_dump object tags are unconditionally unique.
That would, for example, mean that every argument type of every function
would have to be written out fully-schema-qualified.

Short of that sort of anal-retentiveness, there are going to be cases
where the dump order is a bit unpredictable.  IMO what we need is a
reasonable compromise between verbosity and uniqueness, such that in
normal cases (ie, where you *didn't* intentionally create near-identical
functions in different schemas) you get a unique ordering.  To get to
that, somebody's got to go through all the tag writing code and identify
where the trouble spots are.  So far we've heard triggers and operators
nominated ... what else?
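The tie-induced instability being described is easy to model outside pg_dump (hypothetical tags and a toy sort, not pg_dump's actual comparator):

```python
# Hypothetical TOC entries as (tag, schema) pairs.  When the sort key is
# the tag alone, entries with colliding tags keep their original
# (creation-dependent) order, so two logically identical databases can
# dump in different orders.
db_a = [("funcx(geometry)", "s1"), ("funcx(geometry)", "s2")]
db_b = [("funcx(geometry)", "s2"), ("funcx(geometry)", "s1")]

def sort_by_tag(toc):
    """Sort by the (non-unique) tag only; ties keep input order."""
    return sorted(toc, key=lambda entry: entry[0])

def sort_by_tag_and_schema(toc):
    """Sort by a unique key: tag plus schema."""
    return sorted(toc, key=lambda entry: (entry[0], entry[1]))

print(sort_by_tag(db_a) == sort_by_tag(db_b))  # -> False: order differs
# A unique sort key restores a canonical, diff-friendly order:
print(sort_by_tag_and_schema(db_a) == sort_by_tag_and_schema(db_b))  # -> True
```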

regards, tom lane



Re: [HACKERS] "stored procedures"

2011-08-31 Thread Thom Brown
On 9 May 2011 20:52, Merlin Moncure  wrote:
> On Mon, May 9, 2011 at 1:41 PM, Bruce Momjian  wrote:
>> Josh Berkus wrote:
>>> Peter,
>>>
>>> > I would like to collect some specs on this feature.  So does anyone have
>>> > links to documentation of existing implementations, or their own spec
>>> > writeup?  A lot of people appear to have a very clear idea of this
>>> > concept in their own head, so let's start collecting those.
>>>
>>> Delta between SPs and Functions for PostgreSQL:
>>>
>>> * SPs are executed using CALL or EXECUTE, and not SELECT.
>>>
>>> * SPs do not return a value
>>> ** optional: SPs *may* have OUT parameters.
>>
>> [ Late reply.]
>>
>> What is it about stored procedures that would require it not to return a
>> value or use CALL?  I am trying to understand what part of this is
>> "procedures" (doesn't return a values, we decided there isn't much value
>> for that syntax vs. functions), and anonymous transactions.
>
> FWICT the sql standard.  The only summary of standard behaviors I can
> find outside of the standard itself is here:
> http://farrago.sourceforge.net/design/UserDefinedTypesAndRoutines.html.
>  Peter's synopsis of how the standard works is murky at best and
> competing implementations are all over the place...SQL server's
> 'CALL'  feature is basically what I personally would like to see. It
> would complement our functions nicely.
>
> Procedures return values and are invoked with CALL.  Functions return
> values and are in-query callable.
>
> The fact that 'CALL' is not allowed inside a query seems to make it
> pretty darn convenient to make the additional distinction of allowing
> transactional control statements there and not in functions.  You
> don't *have* to allow transactional control statements and could offer
> this feature as an essentially syntax sugar enhancement, but then run
> the risk of boxing yourself out of a useful properties of this feature
> later on because of backwards compatibility issues (in particular, the
> assumption that your are in a running transaction in the procedure
> body).

I've seen no mention of SQL/PSM.  Isn't all of this covered by that?

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] dblink make fails under postgresql 8.4.4 on mac osx 10.4.11

2011-08-31 Thread Gary Merkel
Having trouble installing dblink under PostgreSQL 8.4.4 on MAC OS X 10.4.11

Running make gives the following error:



sed 's,MODULE_PATHNAME,$libdir/dblink,g' dblink.sql.in >dblink.sql

gcc -no-cpp-precomp -O2 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
-I../../src/interfaces/libpq -I. -I../../src/include   -c -o dblink.o
dblink.c

dblink.c: In function 'get_pkey_attnames':

dblink.c:1698: error: 'SnapshotNow' undeclared (first use in this function)

dblink.c:1698: error: (Each undeclared identifier is reported only once

dblink.c:1698: error: for each function it appears in.)

make: *** [dblink.o] Error 1



Does anyone know a fix to this?

Thanks for any help.

Gary


Re: [HACKERS] Bogus nestloop join estimate, ignores WHERE clause

2011-08-31 Thread Marti Raudsepp
On Wed, Aug 31, 2011 at 16:34, Peter Eisentraut  wrote:
> On ons, 2011-08-31 at 15:38 +0300, Marti Raudsepp wrote:
>> I'm getting really surprising planner estimates for a query that's
>> joining another table via a varchar field. All of this was tested on
>> PostgreSQL 8.4.8, 9.0.4 and 9.1rc1.
>
> By any chance, did it work better in 8.4.7?

No. Estimates on 8.4.7 are pretty much the same (139820, 139820 and 9455)

(I built and installed 8.4.7 with a clean database)

Regards,
Marti Raudsepp
voicecom.ee



Re: [HACKERS] Bogus nestloop join estimate, ignores WHERE clause

2011-08-31 Thread Peter Eisentraut
On ons, 2011-08-31 at 15:38 +0300, Marti Raudsepp wrote:
> I'm getting really surprising planner estimates for a query that's
> joining another table via a varchar field. All of this was tested on
> PostgreSQL 8.4.8, 9.0.4 and 9.1rc1.

By any chance, did it work better in 8.4.7?





Re: [HACKERS] postgesql-9.0.4 compile on AIX 6.1 using gcc 4.4.6

2011-08-31 Thread Steve Singer

On 11-08-30 07:58 AM, Weiss, Wilfried wrote:


Hello,

I am just trying to compile postgresql-9.0.4 on AIX 6100-06-03-1048 
using gcc 4.4.6.


Unfortunately that was not all.

There was also:

"[Bug target/46072] AIX linker chokes on debug info for uninitialized 
static variables"


This is an IBM bug in AIX's assembler (as) which produces corrupt object 
code that crashes when executed.


As far as I know IBM still has not delivered a fix for this. It seems 
that they are not interested in this, as IBM's xlc does not use the 
assembler to create object code.


Does anyone know whether there is an alternate way to compile 
PostgreSQL on AIX 6.1 using gcc???


I appreciate even the smallest hint!



I have compiled 9.0.4 on AIX 5.3 with GCC 4.1.1 without any issues.
(Well, the regression tests hit an issue on REL9_0_STABLE builds that 
they don't hit with more recent branches, but that is due to a 
makefile-related issue that I should post about in a different thread.)


The buildfarm member grebe 
(http://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=grebe&br=HEAD) 
does this.


I do not have access to an AIX 6.1 machine.



Regards

WW


http://www.pilkington.com/nsg/disclaimer.htm




[HACKERS] Bogus nestloop join estimate, ignores WHERE clause

2011-08-31 Thread Marti Raudsepp
Hi list!

I'm getting really surprising planner estimates for a query that's
joining another table via a varchar field. All of this was tested on
PostgreSQL 8.4.8, 9.0.4 and 9.1rc1.

The original query is pretty huge, but I managed to shrink it down to this:
SELECT * FROM email_message where email_message.id_code IN (SELECT
id_code FROM client WHERE client_id='83509');

* id_code is an indexed varchar(20) NOT NULL column in both tables
* client_id is the primary key of client.
* There are 149152 rows in email_message and 140975 rows in client
* The most common value in both sides of the join is an empty string.
121970 in email_message and 10753 in client
(Turning the empty values into NULLs helps a little, but still gives
bad estimates)

This is the plan I get:
EXPLAIN SELECT * FROM email_message where email_message.id_code IN
(SELECT id_code FROM client WHERE client_id='83509');
 Nested Loop  (cost=8.28..36.86 rows=139542 width=101)
   ->  HashAggregate  (cost=8.28..8.29 rows=1 width=11)
         ->  Index Scan using client_pkey on client  (cost=0.00..8.28 rows=1 width=11)
               Index Cond: (client_id = 83509)
   ->  Index Scan using email_message_id_code_idx on email_message  (cost=0.00..28.05 rows=41 width=101)
         Index Cond: ((email_message.id_code)::text = (client.id_code)::text)
(6 rows)

This nestloop couldn't possibly generate 139542 rows since the inner
plan is expected to return 1 row and the outer plan 41

After a bit of digging, I figured out that it uses the same estimate
as a semi-join WITHOUT the client_id restriction.
EXPLAIN SELECT * FROM email_message m WHERE EXISTS(SELECT * FROM
client c WHERE m.id_code=c.id_code);
 Nested Loop Semi Join  (cost=0.00..7725.31 rows=139542 width=101)
   ->  Seq Scan on email_message m  (cost=0.00..3966.52 rows=149152 width=101)
   ->  Index Scan using client_id_code_idx1 on client c  (cost=0.00..0.39 rows=1 width=11)
         Index Cond: ((c.id_code)::text = (m.id_code)::text)

For whatever reason, the 1st query completely ignores the fact that
the client_id clause reduces the result count by a large factor.

So I turned this into a simple JOIN and I'm still seeing bad estimates:

EXPLAIN SELECT * FROM email_message JOIN client USING (id_code) WHERE
client_id='83509';
 Nested Loop  (cost=0.00..36.85 rows=9396 width=252)
   ->  Index Scan using client_pkey on client  (cost=0.00..8.28 rows=1 width=162)
         Index Cond: (client_id = 83509)
   ->  Index Scan using email_message_id_code_idx on email_message  (cost=0.00..28.05 rows=41 width=101)
         Index Cond: ((email_message.id_code)::text = (client.id_code)::text)

This is better, but still overestimates massively.

When I change empty values to NULLs, then this JOIN query starts
estimating correctly. So this one is probably confused because the
empty values would result in a cartesian join.

Are there any reasons why nestloop can't use the known (1 * 41) as its estimate?
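For reference, the estimate the question expects is just the product of the child-node estimates from the plan above (plain arithmetic, not planner code):

```python
# Expected nested-loop output: rows from the inner (parameterized) side
# per execution, times the number of executions driven by the outer side.
outer_est_rows = 1    # HashAggregate over client (client_id = 83509)
inner_est_rows_per_loop = 41  # index scan on email_message per outer row
print(outer_est_rows * inner_est_rows_per_loop)  # -> 41, vs. the reported 139542
```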

Regards,
Marti Raudsepp
voicecom.ee



[HACKERS] setlocale() on Windows is broken

2011-08-31 Thread Heikki Linnakangas

While looking through old emails, I bumped into this:

http://archives.postgresql.org/message-id/25219.1303306...@sss.pgh.pa.us

To recap, setlocale() on Windows is broken for locale names that contain 
dots or apostrophes in the country name. That includes "Hong Kong 
S.A.R.", "Macau S.A.R.", "U.A.E.", and "People's Republic of China".


In April, I put in a hack to initdb to map those problematic names to 
aliases that don't contain dots:


People's Republic of China -> China
Hong Kong S.A.R. -> HKG
U.A.E. -> ARE
Macau S.A.R. -> ZHM
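That mapping amounts to a canonicalization table consulted before handing a name to setlocale(). A minimal sketch — the alias spellings are the ones listed above; everything else is illustrative, not the actual initdb code:

```python
# Canonicalize problematic Windows locale country names (the ones with
# dots or apostrophes that Windows' own setlocale() chokes on) to
# aliases it accepts.
LOCALE_ALIASES = {
    "People's Republic of China": "China",
    "Hong Kong S.A.R.": "HKG",
    "U.A.E.": "ARE",
    "Macau S.A.R.": "ZHM",
}

def map_locale(name: str) -> str:
    """Return an alias setlocale() accepts, or the name unchanged."""
    return LOCALE_ALIASES.get(name, name)

print(map_locale("Hong Kong S.A.R."))  # -> HKG
print(map_locale("en_US"))             # -> en_US
```

Doing this inside a setlocale() wrapper (rather than only in initdb) is what makes the round trip work: whatever full name setlocale(LC_XXX, NULL) hands back gets mapped again on the way in.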

However, Hiroshi pointed out in the thread linked above that that 
doesn't completely solve the problem. If you set locale to "HKG", for 
example, setlocale(LC_ALL, NULL) still returns the full name, "Hong Kong 
S.A.R.", and if you feed that back to setlocale() it fails. In 
particular, check_locale() uses "saved = setlocale(LC_XXX, NULL)" to get 
the current value, and tries to restore it later with "setlocale(LC_XXX, 
saved)".



At first, I thought I should revert my hack in initdb, since it's not 
fully solving the problem anyway. But it doesn't really help - you run 
into the same issue if you set locale to one of those aliases manually. 
And that's exactly what users will have to do if we don't map those 
locales automatically.


Microsoft should fix their bug. I don't have much faith in that 
happening, however. So, I think we should move the mapping from initdb 
to somewhere in src/port, so that the mapping is done every time 
setlocale() is called. That would fix the problem with check_locale(): 
even though "setlocale(LC_XXX, NULL)" returns a value that won't work, 
the setlocale() call to restore it would map it to an alias that does 
work again.


In addition to that, I think we should check the return value of 
setlocale() in check_locale(), and throw a warning if restoring the old 
locale fails. The session's locale will still be screwed, but at least 
you'll know if it happens.


I'll go write a patch for that.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] Informix FDW - anybody working on this?

2011-08-31 Thread Bernd Helmle

Out of curiosity,

is anybody working on $subject? I'm currently planning to work on such a 
driver, but given the current stream of new drivers I want to make sure 
not to duplicate any efforts...


--
Thanks

Bernd



Re: [HACKERS] limit in subquery causes poor selectivity estimation

2011-08-31 Thread Peter Eisentraut
On lör, 2011-08-27 at 13:32 -0400, Tom Lane wrote:
> > EXPLAIN SELECT * FROM test1  WHERE sha1 in (SELECT sha1 FROM test2
> LIMIT 200);
> 
> > Here, however, it has apparently not passed this knowledge through
> the
> > LIMIT.
> 
> The LIMIT prevents the subquery from being flattened entirely, ie we
> don't have just "test1 SEMI JOIN test2" but "test1 SEMI JOIN (SELECT *
> FROM test2 LIMIT 200)".  If you look at examine_variable in selfuncs.c
> you'll note that it punts for Vars coming from unflattened subqueries.
> 
> > So what's up with that?  Just a case of, we haven't thought about
> > covering this case yet, or are there larger problems?
> 
> The larger problem is that if a subquery didn't get flattened, it's
> often because it's got LIMIT, or GROUP BY, or some similar clause that
> makes it highly suspect whether the statistics available for the table
> column are reasonable to use for the subquery outputs.  It wouldn't be
> that hard to grab the stats for test2.sha1, but then how do you want
> to adjust them to reflect the LIMIT?

It turns out that this is a regression introduced in 8.4.8; the same
topic is also being discussed in

http://archives.postgresql.org/pgsql-performance/2011-08/msg00248.php

and

http://archives.postgresql.org/pgsql-general/2011-08/msg00995.php

This is the (previously posted) plan with 8.4.8:

                              QUERY PLAN
----------------------------------------------------------------------
 Hash Join  (cost=10.60..34.35 rows=500 width=31)
   Hash Cond: (test1.sha1 = test2.sha1)
   ->  Seq Scan on test1  (cost=0.00..18.00 rows=1000 width=31)
   ->  Hash  (cost=8.10..8.10 rows=200 width=32)
         ->  HashAggregate  (cost=6.10..8.10 rows=200 width=32)
               ->  Limit  (cost=0.00..3.60 rows=200 width=21)
                     ->  Seq Scan on test2  (cost=0.00..18.01 rows=1001 width=21)

And this is the plan with 8.4.7:

                              QUERY PLAN
----------------------------------------------------------------------
 Hash Join  (cost=10.80..34.55 rows=200 width=31)
   Hash Cond: (test1.sha1 = test2.sha1)
   ->  Seq Scan on test1  (cost=0.00..18.00 rows=1000 width=31)
   ->  Hash  (cost=8.30..8.30 rows=200 width=32)
         ->  HashAggregate  (cost=6.30..8.30 rows=200 width=32)
               ->  Limit  (cost=0.00..3.80 rows=200 width=21)
                     ->  Seq Scan on test2  (cost=0.00..19.01 rows=1001 width=21)

I liked the old one better. ;-)





Re: [HACKERS] postgesql-9.0.4 compile on AIX 6.1 using gcc 4.4.6

2011-08-31 Thread Albe Laurenz
Wilfried Weiss wrote:
> I am just trying to compile postgresql-9.0.4 on AIX 6100-06-03-1048
using gcc 4.4.6.

> There was also:
>
> "[Bug target/46072] AIX linker chokes on debug info for uninitialized static variables"

> Does anyone know whether there is an alternate way to compile postgresql on AIX 6.1 using gcc???
>
> I appreciate even the smallest hint!

I don't have any AIX boxes to play with any more, but after reading
the bug description I guess that it should work if you compile
without generating debug info (-g).
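
A sketch of what that might look like (flags illustrative; the assumption is that -g crept in via --enable-debug or a CFLAGS setting):

```
# Make sure no debug info is requested: no --enable-debug, and no -g
# in CFLAGS, so the AIX linker never sees the debug sections that
# trigger GCC bug 46072.
./configure CC=gcc CFLAGS='-O2'    # rather than CFLAGS='-O2 -g'
gmake
gmake install
```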

Yours,
Laurenz Albe



Re: [HACKERS] postgresql.conf archive_command example

2011-08-31 Thread Cédric Villemain
2011/8/30 Peter Eisentraut :
> I think it would be useful to add the following explanation and sample
> to the postgresql.conf sample file:
>
> diff --git i/src/backend/utils/misc/postgresql.conf.sample 
> w/src/backend/utils/misc/postgresql.conf.sample
> --- i/src/backend/utils/misc/postgresql.conf.sample
> +++ w/src/backend/utils/misc/postgresql.conf.sample
> @@ -186,6 +186,9 @@
>  #archive_mode = off            # allows archiving to be done
>                                # (change requires restart)
>  #archive_command = ''          # command to use to archive a logfile segment
> +                               # placeholders: %p = path of file to archive
> +                               #               %f = file name only
> +                               # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
>  #archive_timeout = 0           # force a logfile segment switch after this
>                                # number of seconds; 0 disables
>
> This corresponds to what we have in the documentation and mirrors the
> example in recovery.conf.sample.
>
> Objections?

No objections, it is welcome.
Just a question: can we build a different postgresql.conf for Windows,
or do we add a Windows command example here as well?
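
For what it's worth, a Windows counterpart in the same style would be a one-line addition to the sample (the archive directory path below is illustrative; the copy command mirrors the example in the documentation):

```
#archive_command = ''          # Windows, e.g.:
                               # 'copy "%p" "C:\\server\\archivedir\\%f"'
```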




-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation



Re: [HACKERS] Comparing two PostgreSQL databases -- order of pg_dump output

2011-08-31 Thread Peter Eisentraut
On tis, 2011-08-30 at 18:07 -0400, Tom Lane wrote:
> Joe Abbate  writes:
> > In order to compare the schema of two presumably identical databases, 
> > I've been diffing the output of pg_dump -Osx.  However, I've found that 
> > the order of the output is not very reliable.
> 
> Yeah, we've been around on that before.  pg_dump does actually sort the
> output items (modulo dependency requirements), but it sorts by the same
> "tag" values that are printed by pg_restore -l, and those aren't currently
> designed to be unique.  It's not too clear if we could get away with
> changing the definitions of the tag strings.

It's a bit strange that the tag for a trigger is "name" but the tag for
the trigger's comment is "name ON table".  Not having the table name in
the trigger tag sounds wrong, because it makes the tag not very useful
for selecting the trigger from the TOC.
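
To illustrate the asymmetry, a hypothetical pg_restore -l excerpt (dump IDs, OIDs, and object names invented) might read:

```
2398; 2620 16410 TRIGGER public trg_audit postgres
4423; 0 0 COMMENT public TRIGGER trg_audit ON accounts postgres
```

The comment's tag pins down which table's trigger is meant; the trigger entry's bare "trg_audit" does not.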





Re: [HACKERS] Comparing two PostgreSQL databases -- order of pg_dump output

2011-08-31 Thread Peter Eisentraut
On tis, 2011-08-30 at 19:11 -0400, Stephen Frost wrote:
> * Joe Abbate (j...@freedomcircle.com) wrote:
> > In order to compare the schema of two presumably identical
> > databases, I've been diffing the output of pg_dump -Osx.  
> 
> I'm not sure exactly how it does it, but check_postgres.pl offers this.
> 
> http://bucardo.org/wiki/Check_postgres

That tool is also not without bugs in this regard.

Also, given the interface it works through, it necessarily doesn't
offer a good way to examine the differences in detail; it only shows
you that there are differences.
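
For the record, the feature in question is check_postgres's same_schema action, invoked roughly like this (connection options illustrative):

```
check_postgres.pl --action=same_schema \
    --dbname=db1 --dbname2=db2
```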





Re: [HACKERS] Inputting relative datetimes

2011-08-31 Thread Dean Rasheed
On 30 August 2011 16:40, Robert Haas  wrote:
> OK, committed.

Thanks.

I'm fine with not back-patching it, on the grounds given.

Cheers,
Dean
