Re: [HACKERS] pg_dump --pretty-print-views

2013-02-01 Thread Jeevan Chalke
On Thu, Jan 31, 2013 at 4:40 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

 Andrew Dunstan and...@dunslane.net writes:
  Well, we could actually set the wrap value to 0, which would mean always
  wrap. That wouldn't be making any assumption about the user's terminal
  window size ;-)

 +1


+1

After looking at both versions of the SQL, I too like the wrapped case.



  Personally I find the wrapped case MUCH more readable. I guess anything
 is
  an advance, but turning on PRETTY_INDENT without turning on some level of
  target wrapping seems like a half-done job.

 Minor gripe: the CASE WHEN THEN indenting is not ideal:

   CASE
   WHEN (a.attnotnull OR ((t.typtype = 'd'::char) AND t.typnotnull))
 THEN 'NO'::text
   ELSE 'YES'::text
   END)::information_schema.yes_or_no AS is_nullable,

 I think the following is easier to read:

   CASE
   WHEN (a.attnotnull OR ((t.typtype = 'd'::char) AND t.typnotnull))
   THEN 'NO'::text
   ELSE 'YES'::text
   END)::information_schema.yes_or_no AS is_nullable,

 But I realise it's not exactly what's being talked about in this thread…

 Regards,
 --
 Dimitri Fontaine
 http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




-- 
Jeevan B Chalke
Senior Software Engineer, R&D
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: +91 20 30589500

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb

This e-mail message (and any attachment) is intended for the use of the
individual or entity to whom it is addressed. This message contains
information from EnterpriseDB Corporation that may be privileged,
confidential, or exempt from disclosure under applicable law. If you are
not the intended recipient or authorized to receive this for the intended
recipient, any use, dissemination, distribution, retention, archiving, or
copying of this communication is strictly prohibited. If you have received
this e-mail in error, please notify the sender immediately by reply e-mail
and delete this message.


Re: [HACKERS] Cascading replication: should we detect/prevent cycles?

2013-02-01 Thread Mark Kirkwood

On 01/02/13 20:43, Peter Geoghegan wrote:



On Sunday, 27 January 2013, Robert Haas robertmh...@gmail.com wrote:
  If we're going to start installing safeguards against doing stupid
  things, there's a long list of scenarios that happen far more
  regularly than this ever will and cause far more damage.

+1


+1

...and there are other areas we could spend our energy on that would be
more worthwhile, I think. One I'd like to see is the opposite of:


$ pg_ctl promote

i.e:

$ pg_ctl demote

So a retired master would read a (perhaps newly supplied) 
recovery.conf and start to apply changes from there (with suitable 
safeguards). We have made failover pretty painless now... but 
reconstruction of the original primary as a new standby is still too 
fiddly, resource-intensive, and time-consuming.
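For concreteness, a demoted ex-primary would presumably read something like the standard 9.x standby parameters. The fragment below is only a sketch with placeholder host and user names; the demote behaviour itself is what is being proposed, so nothing here is new syntax:

```
# Hypothetical recovery.conf a demoted ex-primary might read.
# These are ordinary 9.x standby settings; "pg_ctl demote" does not exist yet.
standby_mode = 'on'
primary_conninfo = 'host=new-primary port=5432 user=replicator'
recovery_target_timeline = 'latest'
```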


Regards

Mark



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] Unusually high IO for autovacuum worker

2013-02-01 Thread Pavan Deolasee
(re-adding hackers)

On Fri, Feb 1, 2013 at 2:46 PM, Vlad Bailescu v...@mojitosoftware.com wrote:

 I'm pretty sure the IO is from the autovacuum on the master table because its
 last_autovacuum stats update almost every minute

That only proves that the master table is being vacuumed; it does not
necessarily mean that others are not. I would suggest turning
autovacuum logging on.

 and iotop shows something
 like:

 Total DISK READ:   5.80 M/s | Total DISK WRITE: 115.85 K/s
   TID  PRIO  USER      DISK READ   DISK WRITE  SWAPIN      IO  COMMAND
  7681  be/4  postgres   5.93 M/s    0.00 B/s   0.00 %  63.62 %  postgres:
 autovacuum worker process   fleet

What's interesting is that there are a lot of reads happening, but very
few writes. I can see a scenario where this can happen even with
visibility maps, because vacuum honors the visibility map only if it can
skip 32 blocks at a stretch. But your other claim, that this goes on for
90 seconds even though the table is just a couple of MBs, is hard to
justify. What do the autovacuum settings look like? I wonder if you
have scaled the naptime and thresholds down too low.

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee




Re: [HACKERS] [GENERAL] Unusually high IO for autovacuum worker

2013-02-01 Thread Pavan Deolasee
On Fri, Feb 1, 2013 at 3:07 PM, Pavan Deolasee pavan.deola...@gmail.com wrote:
 (re-adding hackers)


Apologies. Did not realize that the original email was on
pgsql-general. Vlad, please copy -general on your responses.

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee




Re: [HACKERS] Cascading replication: should we detect/prevent cycles?

2013-02-01 Thread Craig Ringer
On 01/28/2013 02:13 AM, Robert Haas wrote:
 If we're going to start installing safeguards against doing stupid
 things, there's a long list of scenarios that happen far more
 regularly than this ever will and cause far more damage. 
I'm not sure this approach is consistent with other decisions made in
the past, and with the project's general goals of stability and quality.

In particular, by the above rationale, why would the project have
removed the implicit casts to/from 'text' in 8.3? That was a minor
safeguard against users doing stupid things. Many other such safeguards
exist - and despite my frustration with some details of the implicit
casts, overall I think that such safeguards are a good and desirable thing.

Safeguards to stop users doing stupid things are part of good usability,
so long as they do not interfere excessively with performance,
functionality, maintainability, etc. Even where data loss/corruption is
not a risk, if it's feasible to detect a misconfiguration, bad command,
etc., IMHO it generally makes sense to do so. Otherwise we land up in
MySQL-land, littered with foot-guns, quirks, and features that are easy
to misconfigure and hard to tell whether they're misconfigured (PITR and
streaming replication are already in that category IMO), and generally
get a reputation for being hard to use, hard to troubleshoot, and painful.

I don't mean to be contrary; I realise that I've raised a differing view
a fair bit recently, but I'm only doing so when I think there's
something to contribute by doing so. Posting a "me too" when I agree is
rather less helpful, so it's the differing views that tend to actually
get posted, despite being in the minority.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] [v9.3] writable foreign tables

2013-02-01 Thread Daniel Farina
On Tue, Jan 29, 2013 at 1:19 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 I noticed the v10 patch cannot be applied on the latest master branch
 cleanly. The attached ones were rebased.

Hello,

I'm just getting started looking at this, but I notice that the second
patch relies on contrib/postgres_fdw to apply, but it's not clear to
me where to get the correct version of that.  One way to solve this
would be to tell me where to get a version of that, and another that
may work well would be to relay a Git repository with postgres_fdw and
the writable fdw changes accumulated.

I poked around on git.postgresql.org and github and wasn't able to
find a recent pushed copy of this.

Anyway, I'm looking at the first patch, which applies neatly. Thanks.

--
fdr




Re: [HACKERS] [v9.3] writable foreign tables

2013-02-01 Thread Kohei KaiGai
2013/2/1 Daniel Farina dan...@heroku.com:
 On Tue, Jan 29, 2013 at 1:19 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 I noticed the v10 patch cannot be applied on the latest master branch
 cleanly. The attached ones were rebased.

 Hello,

 I'm just getting started looking at this, but notice that the second
 patch relies on contrib/postgres_fdw to apply, but it's not clear to
 me where to get the correct version of that.  One way to solve this
 would be to tell me where to get a version of that, and another that
 may work well would be to relay a Git repository with postgres_fdw and
 the writable fdw changes accumulated.

 I poked around on git.postgresql.org and github and wasn't able to
 find a recent pushed copy of this.

 Anyway, I'm looking at the first patch, which applies neatly. Thanks.

Thanks for your reviewing.

The second part assumes the latest (v5) postgres_fdw patch has been
applied. It has been in ready-for-committer status since the last CF,
and nothing has changed.
  https://commitfest.postgresql.org/action/patch_view?id=940

Unless I am overlooking something, it is the latest version I reviewed
before. Is there somebody who can volunteer to review his patch and
commit it?

Hanada-san, if you have some updates, please share it with us.

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp




Re: [HACKERS] Visual Studio 2012 RC

2013-02-01 Thread Craig Ringer
On 01/26/2013 10:56 PM, Noah Misch wrote:
 On Sat, Jan 26, 2013 at 12:20:56PM +0800, Craig Ringer wrote:
 Just to confirm, I think that this is ready for commit as posted in
 20130101025421.ga17...@tornado.leadboat.com.

 I'll amend my docs changes and submit them separately.
 Thanks.  Here's a rebased version.
Is there any chance someone could pick this up for 9.3 before it
diverges too much? It's ready to go, Windows only, and tested.

https://commitfest.postgresql.org/action/patch_view?id=1023


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



[HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-02-01 Thread Pavel Stehule
Hello

now the hardest work is done, and I would like to enable access to the
new error fields from plpgsql.

these new fields are column_name, constraint_name, datatype_name,
table_name and schema_name.

This proposal has an impact on two plpgsql statements: RAISE and GET
STACKED DIAGNOSTICS.

I am sending an initial implementation.

Comments, notes?

Regards

Pavel


enhanced_error_fields_plpgsql_initial.patch
Description: Binary data



Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-02-01 Thread Marko Tiikkaja

On 2/1/13 1:47 PM, Pavel Stehule wrote:

now a most hard work is done and I would to enable access to new
error fields from plpgsql.


Is there a compelling reason why we wouldn't provide these already in 9.3?



Regards,
Marko Tiikkaja




Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-02-01 Thread Pavel Stehule
2013/2/1 Marko Tiikkaja pgm...@joh.to:
 On 2/1/13 1:47 PM, Pavel Stehule wrote:

 now a most hard work is done and I would to enable access to new
 error fields from plpgsql.


 Is there a compelling reason why we wouldn't provide these already in 9.3?

The time for assigning patches to the last commitfest has passed.

This patch is relatively simple and closely related to the enhanced
error fields feature, but it depends on whether some committer will have
time to commit it for 9.3. So I am expecting 9.4 as the primary target,
but I will not be angry if it is committed earlier.

Regards

Pavel




 Regards,
 Marko Tiikkaja




[HACKERS] Turning auto-analyze off (was Re: [GENERAL] Unusually high IO for autovacuum worker)

2013-02-01 Thread Pavan Deolasee
On Fri, Feb 1, 2013 at 6:10 PM, Pavan Deolasee pavan.deola...@gmail.comwrote:





 2012-12-05 00:44:23 EET LOG:  automatic analyze of table
 fleet.fleet.vehicle_position system usage: CPU 4.46s/0.61u sec elapsed
 465.09 sec


 This is the interesting piece of information. So it's the auto-analyze
 that's causing all
 the IO activity. That explains why it was read-only IO that we noticed
 earlier. What's
 happening here, and something that changed from 8.4 to 9.1, is that
 whenever the parent
 table is analyzed, the child tables are also automatically analyzed. I
 don't remember the
 rationale for this change, but in your case the analyze on the parent
 table itself is
 quite useless because even though you are inserting a large number of new
 tuples, you are
 also immediately deleting them. I don't want to comment on the design
 aspect of that,
 but you should be able to fix this problem by disabling auto-analyze on
 the parent table.

 Having said that, I don't see an easy way to just disable auto-analyze on
 a table. You can
  run ALTER TABLE foo SET (autovacuum_enabled = false), but that would also
 disable
 auto-vacuum, which you certainly don't want to do because the parent table
 would just
 keep growing.

  You can set autovacuum_analyze_threshold to an artificially high value
 to mitigate the
 problem and reduce the frequency of auto-analyze on the table or see if
 you can completely
 avoid insert/delete on the parent table.

 ALTER TABLE vehicle_position SET (autovacuum_analyze_threshold  =
 20);


While looking at this particular case on -general, I realized that there is
no way to *only* disable auto-analyze on a table. While one can cheat, as
I suggested to the OP, by setting the threshold very high, I think it
would be useful to be able to turn off just analyze. In this particular
case, the OP is inserting and then deleting the same rows from the parent
table, thus keeping it almost empty. Of course, he would want to run
auto-vacuum on the table to remove the dead rows. Usually auto-analyze
would have returned quite fast, especially because we vacuum a table first
and then analyze it. But in this case, since the table is a parent of a
number of large child tables, we end up analyzing the child tables too,
which takes significantly longer and is quite unnecessary, because in this
case the activity on the parent table cannot have changed any stats for
the child tables.

A new reloption such as autovacuum_analyze_enabled is what we need.

Comments ?

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


Re: [HACKERS] parameter info?

2013-02-01 Thread Andrew Dunstan


On 01/31/2013 11:17 PM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

What's the best way for me to find out if a given parameter of a
function is a constant? The context is that it's expensive to process,
and in most cases will in fact be a constant, so if it is in fact a
constant I'd like to process it once and stash the results.

I think we added get_fn_expr_arg_stable() for precisely that
purpose.





Thanks. That's exactly what I was looking for.

cheers

andrew




Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-02-01 Thread Peter Eisentraut
On 2/1/13 8:00 AM, Pavel Stehule wrote:
 2013/2/1 Marko Tiikkaja pgm...@joh.to:
 On 2/1/13 1:47 PM, Pavel Stehule wrote:

 now a most hard work is done and I would to enable access to new
 error fields from plpgsql.


 Is there a compelling reason why we wouldn't provide these already in 9.3?
 
 a time for assign to last commitfest is out.
 
 this patch is relative simple and really close to enhanced error
 fields feature - but depends if some from commiters will have a time
 for commit to 9.3 - so I am expecting primary target 9.4, but I am not
 be angry if it will be commited early.

If we don't have access to those fields in PL/pgSQL, what was the point
of the patch to begin with?  Surely, accessing them from C wasn't the
main use case?





[HACKERS] obsolete code

2013-02-01 Thread Andrew Dunstan


fmgr.c contains this:

   /*
    *  !!! OLD INTERFACE !!!
    *
    * fmgr() is the only remaining vestige of the old-style caller support
    * functions.  It's no longer used anywhere in the Postgres distribution,
    * but we should leave it around for a release or two to ease the
    * transition for user-supplied C functions.  OidFunctionCallN()
    * replaces it for new code.
    *
    * DEPRECATED, DO NOT USE IN NEW CODE
    */

which git tells me dates from:

   48165ec2 (Tom Lane 2000-06-05 07:29:25 +0000 2080)


Should we just drop all support for the old interface now?

cheers

andrew




Re: [HACKERS] Back-branch update releases coming in a couple weeks

2013-02-01 Thread Andres Freund
On 2013-01-22 22:19:25 -0500, Tom Lane wrote:
 Since we've fixed a couple of relatively nasty bugs recently, the core
 committee has determined that it'd be a good idea to push out PG update
 releases soon.  The current plan is to wrap on Monday Feb 4 for public
 announcement Thursday Feb 7.  If you're aware of any bug fixes you think
 ought to get included, now's the time to get them done ...

I think the patch in
http://www.postgresql.org/message-id/20130130145521.gb3...@awork2.anarazel.de
should get applied before the release if we decide it should be
backpatched. And I think backpatching makes sense: the amount of useless
full-table scans will surely hurt more than some workload that's
accidentally fixed by this.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-02-01 Thread Pavel Stehule
2013/2/1 Peter Eisentraut pete...@gmx.net:
 On 2/1/13 8:00 AM, Pavel Stehule wrote:
 2013/2/1 Marko Tiikkaja pgm...@joh.to:
 On 2/1/13 1:47 PM, Pavel Stehule wrote:

 now a most hard work is done and I would to enable access to new
 error fields from plpgsql.


 Is there a compelling reason why we wouldn't provide these already in 9.3?

 a time for assign to last commitfest is out.

 this patch is relative simple and really close to enhanced error
 fields feature - but depends if some from commiters will have a time
 for commit to 9.3 - so I am expecting primary target 9.4, but I am not
 be angry if it will be commited early.

 If we don't have access to those fields on PL/pgSQL, what was the point
 of the patch to begin with?  Surely, accessing them from C wasn't the
 main use case?


These fields are available to application developers now. But it is
true that without this patch, the GET STACKED DIAGNOSTICS statement will
not be fully consistent, because some fields are accessible and others
are not.

Pavel




Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-02-01 Thread Peter Eisentraut
On 1/31/13 5:42 PM, MauMau wrote:
 Thank you for sharing your experience.  So you also considered making
 postmaster SIGKILL children like me, didn't you?  I bet most people
 who encounter this problem would feel like that.
 
 It is definitely pg_ctl who needs to be prepared, not the users.  It may
 not be easy to find out which postgres processes to SIGKILL if multiple
 instances are running on the same host.  Just doing "pkill postgres"
 will unexpectedly terminate postgres of other instances.

In my case, it was one backend process segfaulting, and then some other
backend processes didn't respond to the subsequent SIGQUIT sent out by
the postmaster.  So pg_ctl didn't have any part in it.

We ended up addressing that by installing a nagios event handler that
checked for this situation and cleaned it up.

 I would like to make a patch that changes SIGQUIT to SIGKILL when the
 postmaster terminates children.  Any other better ideas?

That was my idea back then, but there were some concerns about it.

I found an old patch that I had prepared for this, which I have
attached.  YMMV.
From bebb95abe7a55173cab0558da3373d6a3631465b Mon Sep 17 00:00:00 2001
From: Peter Eisentraut pet...@postgresql.org
Date: Wed, 16 Dec 2009 17:19:14 +0200
Subject: [PATCH 3/3] Time out the ereport() call in quickdie() after 60 seconds

---
 src/backend/tcop/postgres.c |   25 +
 1 files changed, 25 insertions(+), 0 deletions(-)

diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index b2fb501..ab6805a 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -191,6 +191,7 @@ static bool IsTransactionExitStmtList(List *parseTrees);
 static bool IsTransactionStmtList(List *parseTrees);
 static void drop_unnamed_stmt(void);
 static void SigHupHandler(SIGNAL_ARGS);
+static void quickdie_alarm_handler(SIGNAL_ARGS);
 static void log_disconnections(int code, Datum arg);
 
 
@@ -2539,9 +2540,17 @@ void
 quickdie(SIGNAL_ARGS)
 {
 	sigaddset(&BlockSig, SIGQUIT); /* prevent nested calls */
+	sigdelset(&BlockSig, SIGALRM);
 	PG_SETMASK(&BlockSig);
 
/*
+* Set up a timeout in case the ereport() call below blocks for a
+* long time.
+*/
+   pqsignal(SIGALRM, quickdie_alarm_handler);
+   alarm(60);
+
+   /*
 * If we're aborting out of client auth, don't risk trying to send
 * anything to the client; we will likely violate the protocol,
 * not to mention that we may have interrupted the guts of OpenSSL
@@ -2586,6 +2595,22 @@ quickdie(SIGNAL_ARGS)
 }
 
 /*
+ * Take over quickdie()'s work if the alarm expired.
+ */
+static void
+quickdie_alarm_handler(SIGNAL_ARGS)
+{
+   /*
+* We got here if ereport() was blocking, so don't go there again
+* except when really asked for.
+*/
+	elog(DEBUG5, "quickdie aborted by alarm");
+
+   on_exit_reset();
+   exit(2);
+}
+
+/*
  * Shutdown signal from postmaster: abort transaction and exit
  * at soonest convenient time
  */
-- 
1.6.5




Re: [HACKERS] Visual Studio 2012 RC

2013-02-01 Thread Andrew Dunstan


On 02/01/2013 06:12 AM, Craig Ringer wrote:

On 01/26/2013 10:56 PM, Noah Misch wrote:

On Sat, Jan 26, 2013 at 12:20:56PM +0800, Craig Ringer wrote:

Just to confirm, I think that this is ready for commit as posted in
20130101025421.ga17...@tornado.leadboat.com.

I'll amend my docs changes and submit them separately.

Thanks.  Here's a rebased version.
Is there any chance someone could pick this up for 9.3 before it 
diverges too much? It's ready to go, Windows only, and tested.


https://commitfest.postgresql.org/action/patch_view?id=1023






I expect to get to it in due course if nobody else does. I have been set 
back some by the death and replacement of my Windows workstation (plus 
coming to terms with Windows 8).


cheers

andrew





Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-02-01 Thread Pavel Stehule
2013/2/1 Pavel Stehule pavel.steh...@gmail.com:
 2013/2/1 Peter Eisentraut pete...@gmx.net:
 On 2/1/13 8:00 AM, Pavel Stehule wrote:
 2013/2/1 Marko Tiikkaja pgm...@joh.to:
 On 2/1/13 1:47 PM, Pavel Stehule wrote:

 now a most hard work is done and I would to enable access to new
 error fields from plpgsql.


 Is there a compelling reason why we wouldn't provide these already in 9.3?

 a time for assign to last commitfest is out.

 this patch is relative simple and really close to enhanced error
 fields feature - but depends if some from commiters will have a time
 for commit to 9.3 - so I am expecting primary target 9.4, but I am not
 be angry if it will be commited early.

 If we don't have access to those fields on PL/pgSQL, what was the point
 of the patch to begin with?  Surely, accessing them from C wasn't the
 main use case?


 These fields are available for application developers now. But is a
 true, so without this patch, GET STACKED DIAGNOSTICS statement will
 not be fully consistent, because some fields are accessible and others
 not

There is one stronger argument for committing this patch now: with this
patch, we are able to write regression tests for the new fields via
plpgsql.

Regards

Pavel


 Pavel




Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-02-01 Thread Andres Freund
On 2013-02-01 08:55:24 -0500, Peter Eisentraut wrote:
 On 1/31/13 5:42 PM, MauMau wrote:
  Thank you for sharing your experience.  So you also considered making
  postmaster SIGKILL children like me, didn't you?  I bet most people
  who encounter this problem would feel like that.
  
  It is definitely pg_ctl who needs to be prepared, not the users.  It may
  not be easy to find out postgres processes to SIGKILL if multiple
  instances are running on the same host.  Just doing "pkill postgres"
  will unexpectedly terminate postgres of other instances.
 
 In my case, it was one backend process segfaulting, and then some other
 backend processes didn't respond to the subsequent SIGQUIT sent out by
 the postmaster.  So pg_ctl didn't have any part in it.
 
 We ended up addressing that by installing a nagios event handler that
 checked for this situation and cleaned it up.
 
  I would like to make a patch which that changes SIGQUIT to SIGKILL when
  postmaster terminates children.  Any other better ideas?
 
 That was my idea back then, but there were some concerns about it.
 
 I found an old patch that I had prepared for this, which I have
 attached.  YMMV.

 +static void
 +quickdie_alarm_handler(SIGNAL_ARGS)
 +{
 + /*
 +  * We got here if ereport() was blocking, so don't go there again
 +  * except when really asked for.
 +  */
  + elog(DEBUG5, "quickdie aborted by alarm");
 +

It's probably not wise to enter elog.c again; that path might allocate
memory and we wouldn't be any wiser. Unfortunately there's not much
besides a write(2) to stderr that can safely be done...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] [PATCH] HOT on tables with oid indexes broken

2013-02-01 Thread Alvaro Herrera
Andres Freund wrote:
 
 Hi,
 
 The fklocks patch moved HeapSatisfiesHOTandKeyUpdate (or rather
 HeapSatisfiesHOTUpdate back then) to be called way earlier in
 heap_update, as it's needed to know which lock level is
 required. Unfortunately the oid of the new tuple isn't yet set up at that
 point.

Applied, thanks.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-02-01 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-02-01 08:55:24 -0500, Peter Eisentraut wrote:
 I found an old patch that I had prepared for this, which I have
 attached.  YMMV.

 +static void
 +quickdie_alarm_handler(SIGNAL_ARGS)
 +{
 +/*
 + * We got here if ereport() was blocking, so don't go there again
 + * except when really asked for.
 + */
  +elog(DEBUG5, "quickdie aborted by alarm");
 +

 Its probably not wise to enter elog.c again, that path might allocate
 memory and we wouldn't be any wiser. Unfortunately there's not much
 besides a write(2) to stderr that can safely be done...

Another objection to this is it still doesn't fix the case where the
process has blocked SIGQUIT.  My faith that libperl, libR, etc will
never do that is nil.

I think if we want something that's actually trustworthy, the
wait-and-then-kill logic has to be in the postmaster.

regards, tom lane




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-02-01 Thread Alvaro Herrera
Andres Freund wrote:

 If you're careful you can also notice that there is an interesting typo
 in the freeze table computation. Namely, it uses freeze_min_age instead
 of freeze_table_age. This probably explains why I had such bad
 performance results when lowering vacuum_freeze_min_age: it basically
 radically increases the amount of full-table scans, far more than it
 should.
 
 I can't imagine that anybody with a large database ran pg successfully
 with a small freeze_min_age due to this.
 
 It seems to be broken since the initial introduction of freeze_table_age
 in 6587818542e79012276dcfedb2f97e3522ee5e9b. I guess it wasn't noticed
 because the behaviour is only visible via autovacuum because a
 user-issued VACUUM passes -1 as freeze_min_age.

Backpatched all the way back to 8.4

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Turning auto-analyze off (was Re: [GENERAL] Unusually high IO for autovacuum worker)

2013-02-01 Thread Tom Lane
Pavan Deolasee pavan.deola...@gmail.com writes:
 While looking at this particular case on -general, I realized that there is
 no way to *only* disable auto-analyze on a table. While one can cheat like
 what I suggested to the OP by setting threshold very high, I think it will
 be useful to be able to turn off just analyze. In this particular case, the OP
 is inserting and then deleting the same rows from the parent table, thus
 keeping it almost empty. Of course, he would want to run auto-vacuum on the
 table to remove the dead rows. Usually auto-analyze would have returned
 quite fast, especially because we vacuum a table first and then analyze it.
 But in this case, since the table is a parent of a number of large child
 tables, we end up analyzing the child tables too, which takes significantly
 longer time and is quite unnecessary because in this case the activity on
 the parent table must not have changed any stats for the child tables.

 A new reloption such as autovacuum_analyze_enabled is what we need.

This seems to me to be a wart that doesn't fix the actual problem ---
the actual problem is to make the autovac daemon smarter about when an
inheritance-tree ANALYZE pass is needed.  That should be done somehow
based on the total row churn across the parent + children.  Looking
at the parent only, as we do now, can result in analyzing too often
(the OP's case) or too seldom (the much more common case).  A manual
off switch fixes only the less common case, and requires user
intervention that we'd be better off without.
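For reference, the cheat Pavan alludes to (setting the threshold very
high) looks like this; `parent_tbl` is a placeholder name, and the value
is simply chosen to be out of reach:

```sql
-- Effectively disable auto-analyze on one table by raising its
-- per-table threshold, while leaving auto-vacuum itself untouched.
ALTER TABLE parent_tbl SET (autovacuum_analyze_threshold = 2000000000);
```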

regards, tom lane




Re: [HACKERS] Turning auto-analyze off (was Re: [GENERAL] Unusually high IO for autovacuum worker)

2013-02-01 Thread Alvaro Herrera
Pavan Deolasee wrote:

 While looking at this particular case on -general, I realized that there is
 no way to *only* disable auto-analyze on a table. While one can cheat like
 what I suggested to the OP by setting threshold very high, I think it will
 be useful to be able to just turn off analyze. In this particular case, the OP
 is inserting and then deleting the same rows from the parent table, thus
 keeping it almost empty. Of course, he would want to run auto-vacuum on the
 table to remove the dead rows. Usually auto-analyze would have returned
 quite fast, especially because we vacuum a table first and then analyze it.
 But in this case, since the table is a parent of a number of large child
 tables, we end up analyzing the child tables too, which takes significantly
 longer time and is quite unnecessary because in this case the activity on
 the parent table must not have changed any stats for the child tables.
 
 A new reloption such as autovacuum_analyze_enabled is what we need.

I was thinking about this option just three days ago, so yeah.

I think we also want an option to turn off just vacuum.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] obsolete code

2013-02-01 Thread Tom Lane
Andrew Dunstan <and...@dunslane.net> writes:
 fmgr.c contains this:

   * DEPRECATED, DO NOT USE IN NEW CODE

 Should we just drop all support for the old interface now?

Is there any actual benefit to removing it?  I don't recall that
it's been the source of any maintenance burden.  I'd be fine with
dropping it if it were costing us something measurable, but ...

regards, tom lane




Re: [HACKERS] Turning auto-analyze off (was Re: [GENERAL] Unusually high IO for autovacuum worker)

2013-02-01 Thread Pavan Deolasee
On Fri, Feb 1, 2013 at 9:04 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Pavan Deolasee <pavan.deola...@gmail.com> writes:

 A new reloption such as autovacuum_analyze_enabled is what we need.

 This seems to me to be a wart that doesn't fix the actual problem ---

IMHO this case is just one example; I'm sure there are similar
examples that do not involve inheritance. For example, say a user
has a very large table which is updated very frequently, but not in a
way that affects his query plans. The user may want to turn
auto-analyze off in such cases. And given that we allow the user to
control all the other parameters, I don't understand why we would not
let him turn it off completely.

There is another problem that I noticed while looking at this case.
The analyze took close to 500 sec on fairly good hardware (40GB RAM,
10K rpm disks on RAID10) because many large child tables were scanned
at once. We analyze all of them in a single transaction, and this
long-running transaction will cause a lot of bloat for heavily updated
tables since HOT will fail to keep up. I wonder if we should set up
the child tables in the tableoid_list just like we do for toast tables,
so that each table is analyzed in its own transaction. This is also
important because partitioning will typically involve very large
tables.

Of course, if we could ever run analyze on a single table in multiple
smaller transactions, that would be even better. But I'm not sure if
that's feasible.
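Until something like that exists, the per-table-transaction behavior can
be approximated by hand, since each top-level ANALYZE statement runs and
commits in its own transaction. A rough sketch (the table names are
placeholders, not from the case above):

```sql
-- Analyze each child separately so no single transaction stays open
-- for the whole inheritance tree.
ANALYZE child_2013_01;
ANALYZE child_2013_02;
-- The parent pass still samples the children for inheritance
-- statistics in one transaction, but the per-child passes above
-- already ran in short, separate transactions.
ANALYZE parent_tbl;
```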

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee




Re: [HACKERS] Streaming-only cascading replica won't come up without writes on the master

2013-02-01 Thread Heikki Linnakangas

On 01.02.2013 02:04, Josh Berkus wrote:

I thought this was only a 9.3 issue, but it turns out to be
reproduceable on 9.2.2.  Basically, I did:

1. master is quiescent ... no writes occurring.
2. created cascading replica (reprep1) from replica (repmaster)
3. reprep1 remains in recovery mode until a write occurs on master

I've been able to reproduce this several times on my laptop using
postmasters running on different ports.


Hmm, that sounds like the issue discussed here: 
http://www.postgresql.org/message-id/50c7612e.5060...@vmware.com, and 
fixed by this commit:


http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fb565f8c9616ec8ab1b5176d16f310725e581e6e

Can you check if you still see that behavior with REL9_2_STABLE?

- Heikki




Re: [HACKERS] obsolete code

2013-02-01 Thread Andrew Dunstan


On 02/01/2013 10:38 AM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

fmgr.c contains this:
   * DEPRECATED, DO NOT USE IN NEW CODE
Should we just drop all support for the old interface now?

Is there any actual benefit to removing it?  I don't recall that
it's been the source of any maintenance burden.  I'd be fine with
dropping it if it were costing us something measurable, but ...





My hope was that if we got rid of the old stuff we wouldn't need to use

   PG_FUNCTION_INFO_V1(myfunc);



in external modules any more (I recently got bitten through forgetting 
this and it cost me an hour or two).


cheers

andrew





Re: [HACKERS] Turning auto-analyze off (was Re: [GENERAL] Unusually high IO for autovacuum worker)

2013-02-01 Thread Pavan Deolasee
On Fri, Feb 1, 2013 at 9:07 PM, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
 Pavan Deolasee wrote:


 A new reloption such as autovacuum_analyze_enabled is what we need.

 I was thinking about this option just three days ago, so yeah.

 I think we also want an option to turn off just vacuum.


+1. I think that will be useful too in some situations, especially
when the user wants to let autovacuum take care of some tables and
manually vacuum the rest.

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee




Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-01 Thread Alvaro Herrera
Tomas Vondra wrote:

 diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
 index be3adf1..4ec485e 100644
 --- a/src/backend/postmaster/pgstat.c
 +++ b/src/backend/postmaster/pgstat.c
 @@ -64,10 +64,14 @@
  
  /* --
   * Paths for the statistics files (relative to installation's $PGDATA).
 + * Permanent and temprorary, global and per-database files.

Note typo in the line above.

 -#define PGSTAT_STAT_PERMANENT_FILENAME	"global/pgstat.stat"
 -#define PGSTAT_STAT_PERMANENT_TMPFILE		"global/pgstat.tmp"
 +#define PGSTAT_STAT_PERMANENT_DIRECTORY		"stat"
 +#define PGSTAT_STAT_PERMANENT_FILENAME		"stat/global.stat"
 +#define PGSTAT_STAT_PERMANENT_TMPFILE		"stat/global.tmp"
 +#define PGSTAT_STAT_PERMANENT_DB_FILENAME	"stat/%d.stat"
 +#define PGSTAT_STAT_PERMANENT_DB_TMPFILE	"stat/%d.tmp"

 +char	   *pgstat_stat_directory = NULL;
  char	   *pgstat_stat_filename = NULL;
  char	   *pgstat_stat_tmpname = NULL;
 +char	   *pgstat_stat_db_filename = NULL;
 +char	   *pgstat_stat_db_tmpname = NULL;

I don't like the quoted parts very much; it seems awkward to have the
snprintf patterns in one place and have them be used in very distant
places.  Is there a way to improve that?  Also, if I understand clearly,
the pgstat_stat_db_filename value needs to be an snprintf pattern too,
right?  What if it doesn't contain the required % specifier?

Also, if you can filter this through pgindent, that would be best.  Make
sure to add DBWriteRequest to src/tools/pgindent/typedefs_list.

 +	/*
 +	 * There's no request for this DB yet, so let's create it (allocate
 +	 * space for it, set the values).
 +	 */
 +	if (last_statrequests == NULL)
 +		last_statrequests = palloc(sizeof(DBWriteRequest));
 +	else
 +		last_statrequests = repalloc(last_statrequests,
 +							(num_statrequests + 1) * sizeof(DBWriteRequest));
 +
 +	last_statrequests[num_statrequests].databaseid = msg->databaseid;
 +	last_statrequests[num_statrequests].request_time = msg->clock_time;
 +	num_statrequests += 1;

Having to repalloc this array each time seems wrong.  Would a list
instead of an array help?  see ilist.c/h; I vote for a dlist because you
can easily delete elements from the middle of it, if required (I think
you'd need that.)
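As a side note on the repalloc-per-request pattern quoted above: even
without switching to a list, growing the array geometrically would avoid
the per-element reallocation. A minimal, self-contained sketch in plain C
(not PostgreSQL's palloc/ilist APIs; `RequestList`, its fields, and
`request_list_append` are invented names for illustration, and error
handling is omitted):

```c
#include <stdlib.h>

/* Mirrors the element type from the patch, reduced to one field. */
typedef struct
{
	int			databaseid;
} DBWriteRequest;

typedef struct
{
	DBWriteRequest *items;
	size_t		len;			/* elements in use */
	size_t		cap;			/* elements allocated */
} RequestList;

/*
 * Append with doubling capacity: amortized O(1) per append, instead of
 * one realloc for every single insertion as the quoted patch does.
 */
static void
request_list_append(RequestList *l, int databaseid)
{
	if (l->len == l->cap)
	{
		l->cap = l->cap ? l->cap * 2 : 4;	/* double instead of +1 */
		l->items = realloc(l->items, l->cap * sizeof(DBWriteRequest));
	}
	l->items[l->len++].databaseid = databaseid;
}
```

A doubly linked list, as suggested, additionally makes mid-list deletion
cheap, which the array form does not.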

 +	char		db_statfile[strlen(pgstat_stat_db_filename) + 11];
 +	snprintf(db_statfile, strlen(pgstat_stat_db_filename) + 11,
 +			 pgstat_stat_filename, dbentry->databaseid);

This pattern seems rather frequent.  Can we use a macro or similar here?
Encapsulating the 11 better would be good.  Magic numbers are evil.

 diff --git a/src/include/pgstat.h b/src/include/pgstat.h
 index 613c1c2..b3467d2 100644
 --- a/src/include/pgstat.h
 +++ b/src/include/pgstat.h
 @@ -205,6 +205,7 @@ typedef struct PgStat_MsgInquiry
   PgStat_MsgHdr m_hdr;
   TimestampTz clock_time; /* observed local clock time */
   TimestampTz cutoff_time;/* minimum acceptable file timestamp */
 +	Oid			databaseid;		/* requested DB (InvalidOid = all DBs) */
  } PgStat_MsgInquiry;

Do we need to support the case that somebody requests stuff from the
shared DB?  IIRC that's what InvalidOid means in pgstat ...

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] obsolete code

2013-02-01 Thread Tom Lane
Andrew Dunstan <and...@dunslane.net> writes:
 My hope was that if we got rid of the old stuff we wouldn't need to use
 PG_FUNCTION_INFO_V1(myfunc);
 in external modules any more (I recently got bitten through forgetting 
 this and it cost me an hour or two).

Oh.  Well, that's entirely unrelated to whether we leave fmgr() around.
fmgr() is the other end of the old call interface.

I'm not really thrilled with switching the default assumption to be V1,
especially not if we implement that by telling authors they can stop
bothering with the macros.  The pain will just come back sometime in the
future when we decide we need a function API V2.  (I'm actually a bit
surprised V1 has lived this long without changes.)

Here's a different straw-man proposal: let's start requiring *all*
external C functions to have an API-version block.  We can add a
PG_FUNCTION_INFO_V0(myfunc) macro for those people who are still using
V0 and don't feel like changing their code (and you know they're out
there).  For the rest of us, this would allow emitting an appropriate
error when we forget the macro.
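For anyone following along, the macro in question is the one a V1
extension function carries today; forgetting it (as Andrew describes
upthread) silently makes the function be called with the old convention.
A minimal sketch of the usual pattern (this requires the server headers
and PGXS to actually build; `add_one` is an invented example function):

```c
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/*
 * This is the macro that is easy to forget; under the proposal above,
 * its absence would raise an error instead of silently miscalling the
 * function with the V0 convention.
 */
PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
	int32		arg = PG_GETARG_INT32(0);

	PG_RETURN_INT32(arg + 1);
}
```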

regards, tom lane




Re: [HACKERS] obsolete code

2013-02-01 Thread Pavel Stehule
2013/2/1 Andrew Dunstan <and...@dunslane.net>:

 On 02/01/2013 10:38 AM, Tom Lane wrote:

 Andrew Dunstan and...@dunslane.net writes:

 fmgr.c contains this:
* DEPRECATED, DO NOT USE IN NEW CODE
 Should we just drop all support for the old interface now?

 Is there any actual benefit to removing it?  I don't recall that
 it's been the source of any maintenance burden.  I'd be fine with
 dropping it if it were costing us something measurable, but ...





 My hope was that if we got rid of the old stuff we wouldn't need to use

PG_FUNCTION_INFO_V1(myfunc);


Removing the function descriptor would not be a good idea - it can be
used for some annotation.

I had a similar issue too - it would be nice if this were solved with
some assertions.

Regards

Pavel



 in external modules any more (I recently got bitten through forgetting this
 and it cost me an hour or two).

 cheers

 andrew








Re: [HACKERS] obsolete code

2013-02-01 Thread Pavel Stehule
2013/2/1 Tom Lane <t...@sss.pgh.pa.us>:
 Andrew Dunstan <and...@dunslane.net> writes:
 My hope was that if we got rid of the old stuff we wouldn't need to use
 PG_FUNCTION_INFO_V1(myfunc);
 in external modules any more (I recently got bitten through forgetting
 this and it cost me an hour or two).

 Oh.  Well, that's entirely unrelated to whether we leave fmgr() around.
 fmgr() is the other end of the old call interface.

 I'm not really thrilled with switching the default assumption to be V1,
 especially not if we implement that by telling authors they can stop
 bothering with the macros.  The pain will just come back sometime in the
 future when we decide we need a function API V2.  (I'm actually a bit
 surprised V1 has lived this long without changes.)

 Here's a different straw-man proposal: let's start requiring *all*
 external C functions to have an API-version block.  We can add a
 PG_FUNCTION_INFO_V0(myfunc) macro for those people who are still using
 V0 and don't feel like changing their code (and you know they're out
 there).  For the rest of us, this would allow emitting an appropriate
 error when we forget the macro.

I like this idea,

Pavel


 regards, tom lane






Re: [HACKERS] obsolete code

2013-02-01 Thread Pavel Stehule
2013/2/1 Pavel Stehule <pavel.steh...@gmail.com>:
 2013/2/1 Tom Lane <t...@sss.pgh.pa.us>:
 Andrew Dunstan <and...@dunslane.net> writes:
 My hope was that if we got rid of the old stuff we wouldn't need to use
 PG_FUNCTION_INFO_V1(myfunc);
 in external modules any more (I recently got bitten through forgetting
 this and it cost me an hour or two).

 Oh.  Well, that's entirely unrelated to whether we leave fmgr() around.
 fmgr() is the other end of the old call interface.

 I'm not really thrilled with switching the default assumption to be V1,
 especially not if we implement that by telling authors they can stop
 bothering with the macros.  The pain will just come back sometime in the
 future when we decide we need a function API V2.  (I'm actually a bit
 surprised V1 has lived this long without changes.)

 Here's a different straw-man proposal: let's start requiring *all*
 external C functions to have an API-version block.  We can add a
 PG_FUNCTION_INFO_V0(myfunc) macro for those people who are still using
 V0 and don't feel like changing their code (and you know they're out
 there).  For the rest of us, this would allow emitting an appropriate
 error when we forget the macro.

 I like this idea,


but some users use V0 for glibc functions - it is probably an ugly
hack, but it allows using external libraries - and it should still be
possible to use it

Regards

Pavel



 Pavel


 regards, tom lane






Re: [HACKERS] obsolete code

2013-02-01 Thread Stephen Frost
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
 but some users use V0 for glibc functions - it is probably an ugly
 hack, but it allows using external libraries - and it should still be
 possible to use it

No, I don't think we need or want to continue to support such an ugly,
ugly, hack.

+1 for adding a V0 and requiring it.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] pkg-config files for libpq and ecpg

2013-02-01 Thread Peter Eisentraut
On 1/15/13 6:53 PM, Tom Lane wrote:
 Peter Eisentraut <pete...@gmx.net> writes:
 I'll take another stab at providing pkg-config files for the client-side
 libraries.
 
 This bit:
 
  +echo 'Libs.private: $(filter-out $(PKG_CONFIG_REQUIRES_PRIVATE:lib%=-l%),$(filter-out -L..%, $(SHLIB_LINK)))' >>$@
 
 appears to assume that SHLIB_LINK contains nothing except -L and -l
 switches.  I don't think I trust that a whole lot --- in fact, it
 looks guaranteed to fail on HPUX because of -print-libgcc-file-name.
 There might be other platform-specific bogosity on other platforms;
 PTHREAD_LIBS seems like a likely source for instance.

Updated patch addressing this concern.  Also added comments and
documentation.
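For context, the point of shipping .pc files is that client builds can
then pull the flags from pkg-config instead of pg_config. A hypothetical
consumer Makefile fragment (assumes pkg-config can find the installed
libpq.pc; `myapp` is a placeholder target):

```make
# Hypothetical Makefile fragment for a libpq client program.
CFLAGS += $(shell pkg-config --cflags libpq)
LDLIBS += $(shell pkg-config --libs libpq)

myapp: myapp.o
	$(CC) $(CFLAGS) -o $@ $^ $(LDLIBS)
```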

diff --git a/.gitignore b/.gitignore
index 4df314c..8e227a2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -22,6 +22,7 @@ lcov.info
 win32ver.rc
 *.exe
 lib*dll.def
+lib*.pc
 
 # Local excludes in root directory
 /GNUmakefile
diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml
index 4d904cd..68bcb13 100644
--- a/doc/src/sgml/ecpg.sgml
+++ b/doc/src/sgml/ecpg.sgml
@@ -5715,6 +5715,15 @@ <title>Processing Embedded SQL Programs</title>
   </para>
 
   <para>
+   You can
+   use <command>pg_config</command><indexterm><primary>pg_config</primary><secondary sortas="ecpg">with
+   ecpg</secondary></indexterm>
+   or <command>pkg-config</command><indexterm><primary>pkg-config</primary><secondary sortas="ecpg">with
+   ecpg</secondary></indexterm> with package name <literal>libecpg</literal> to
+   get the paths for your installation.
+  </para>
+
+  <para>
    If you manage the build process of a larger project using
    <application>make</application>, it might be convenient to include
    the following implicit rule to your makefiles:
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index aa2ec2a..1794f9b 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -7641,6 +7641,18 @@ <title>Building <application>libpq</application> Programs</title>
  </para>
 
  <para>
+  If you
+  have <command>pkg-config</command><indexterm><primary>pkg-config</primary><secondary sortas="libpq">with
+  libpq</secondary></indexterm> installed, you can run instead:
+<screen>
+<prompt>$</prompt> pkg-config --cflags libpq
+<computeroutput>-I/usr/local/include</computeroutput>
+</screen>
+  Note that this will already include the <option>-I</option> in front of
+  the path.
+ </para>
+
+ <para>
   Failure to specify the correct option to the compiler will
   result in an error message such as:
 <screen>
@@ -7675,6 +7687,15 @@ <title>Building <application>libpq</application> Programs</title>
  </para>
 
  <para>
+  Or again use <command>pkg-config</command>:
+<screen>
+<prompt>$</prompt> pkg-config --libs libpq
+<computeroutput>-L/usr/local/pgsql/lib -lpq</computeroutput>
+</screen>
+  Note again that this prints the full options, not only the path.
+ </para>
+
+ <para>
   Error messages that point to problems in this area could look like
   the following:
 <screen>
diff --git a/src/Makefile.shlib b/src/Makefile.shlib
index 4da2f10..2527fe9 100644
--- a/src/Makefile.shlib
+++ b/src/Makefile.shlib
@@ -87,6 +87,7 @@ shlib_bare= lib$(NAME)$(DLSUFFIX)
 # Testing the soname variable is a reliable way to determine whether a
 # linkable library is being built.
 soname = $(shlib_major)
+pkgconfigdir = $(libdir)/pkgconfig
 else
 # Naming convention for dynamically loadable modules
 shlib  = $(NAME)$(DLSUFFIX)
@@ -305,6 +306,7 @@ all-lib: all-shared-lib
 ifdef soname
 # no static library when building a dynamically loadable module
 all-lib: all-static-lib
+all-lib: lib$(NAME).pc
 endif
 
 all-static-lib: $(stlib)
@@ -388,6 +390,25 @@ $(stlib): $(shlib) $(DLL_DEFFILE) | $(SHLIB_PREREQS)
 endif # PORTNAME == cygwin || PORTNAME == win32
 
 
+%.pc: $(MAKEFILE_LIST)
+	echo 'Name: lib$(NAME)' >$@
+	echo 'Description: PostgreSQL lib$(NAME) library' >>$@
+	echo 'Url: http://www.postgresql.org/' >>$@
+	echo 'Version: $(VERSION)' >>$@
+	echo 'Requires: ' >>$@
+	echo 'Requires.private: $(PKG_CONFIG_REQUIRES_PRIVATE)' >>$@
+# Record -I flags that the user might have passed in to the PostgreSQL
+# build to locate third-party libraries (e.g., ldap, ssl).  Filter out
+# those that point inside the build or source tree.  Use sort to
+# remove duplicates.
+	echo 'Cflags: -I$(includedir) $(sort $(filter-out -I.% -I$(top_srcdir)/%,$(filter -I%,$(CPPFLAGS))))' >>$@
+	echo 'Libs: -L$(libdir) -l$(NAME)' >>$@
+# Record -L flags, as above for Cflags.  Also record the -l flags
+# necessary for static linking, but not those already covered by
+# Requires.private.
+	echo 'Libs.private: $(sort $(filter-out -L.% -L$(top_srcdir)/%,$(filter -L%,$(LDFLAGS) $(SHLIB_LINK)))) $(filter-out $(PKG_CONFIG_REQUIRES_PRIVATE:lib%=-l%),$(filter -l%,$(SHLIB_LINK)))' >>$@
+
+
+
+
 # We need several not-quite-identical variants of .DEF files to build
 # DLLs for Windows.  These are made from the single source file
 # exports.txt.  Since we can't assume that 

Re: [HACKERS] obsolete code

2013-02-01 Thread Andrew Dunstan


On 02/01/2013 11:20 AM, Tom Lane wrote:


I'm not really thrilled with switching the default assumption to be V1,
especially not if we implement that by telling authors they can stop
bothering with the macros.  The pain will just come back sometime in the
future when we decide we need a function API V2.  (I'm actually a bit
surprised V1 has lived this long without changes.)

Here's a different straw-man proposal: let's start requiring *all*
external C functions to have an API-version block.  We can add a
PG_FUNCTION_INFO_V0(myfunc) macro for those people who are still using
V0 and don't feel like changing their code (and you know they're out
there).  For the rest of us, this would allow emitting an appropriate
error when we forget the macro.



Sounds like a good plan.

cheers

andrew




Re: [HACKERS] PL/PgSQL STRICT

2013-02-01 Thread Peter Eisentraut
On 1/26/13 11:28 AM, Marko Tiikkaja wrote:
 On Fri, 21 Dec 2012 16:14:19 +0100, I wrote:
 I wrote a patch
 which allows you to add STRICT into PERFORM and INSERT/UPDATE/DELETE
 without specifying an INTO clause.
 
 Here's the second version of the patch, addressing the syntax issues.

I think the new syntax is horribly ugly.  The actual command name should
always come first, not options.  What will happen if people add more
options along this line?

 I also couldn't make the grammar work with PERFORM STRICT, so I allowed 
 STRICT SELECT  instead.

I don't quite understand the reason for distinguishing PERFORM and
SELECT, but what you are proposing will make this even more confusing.


That said, I don't quite believe in the premise for this patch to begin
with.  The supposed analogy is with INTO STRICT.  But that is
effectively a variable assignment and checks whether that assignment was
performed correctly.  So for scalar variables, this checks that exactly
one value was returned.  I'd imagine if we allowed a syntax like ...
INTO (a, b, c), (d, e, f) it would check that exactly two rows were
returned.  So this clause basically just ensures that the run-time
behavior is consistent with the appearance of the source code.

What you are now proposing is that STRICT means one, but that's just
an opinion.  The SQL standard recognizes that updating or deleting
nothing is a noteworthy condition, so I could get partially behind this
patch if it just errored out when zero rows are affected, but insisting
on one is just arbitrary.

(Note also that elsewhere STRICT means something like not null, so the
term is getting a bit overloaded.)





Re: [HACKERS] Turning auto-analyze off (was Re: [GENERAL] Unusually high IO for autovacuum worker)

2013-02-01 Thread Bruce Momjian
On Fri, Feb  1, 2013 at 12:37:21PM -0300, Alvaro Herrera wrote:
 Pavan Deolasee wrote:
 
  While looking at this particular case on -general, I realized that there is
  no way to *only* disable auto-analyze on a table. While one can cheat like
  what I suggested to the OP by setting threshold very high, I think it will
  be useful to be able to just turn off analyze. In this particular case, the OP
  is inserting and then deleting the same rows from the parent table, thus
  keeping it almost empty. Of course, he would want to run auto-vacuum on the
  table to remove the dead rows. Usually auto-analyze would have returned
  quite fast, especially because we vacuum a table first and then analyze it.
  But in this case, since the table is a parent of a number of large child
  tables, we end up analyzing the child tables too, which takes significantly
  longer time and is quite unnecessary because in this case the activity on
  the parent table must not have changed any stats for the child tables.
  
  A new reloption such as autovacuum_analyze_enabled is what we need.
 
 I was thinking about this option just three days ago, so yeah.
 
 I think we also want an option to turn off just vacuum.

Are these TODO items?

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Turning auto-analyze off (was Re: [GENERAL] Unusually high IO for autovacuum worker)

2013-02-01 Thread Pavan Deolasee
On Fri, Feb 1, 2013 at 10:53 PM, Bruce Momjian <br...@momjian.us> wrote:
 On Fri, Feb  1, 2013 at 12:37:21PM -0300, Alvaro Herrera wrote:

  A new reloption such as autovacuum_analyze_enabled is what we need.

  I was thinking about this option just three days ago, so yeah.

 I think we also want an option to turn off just vacuum.

 Are these TODO items?


I can write a patch in next couple of days if we are willing to accept
for this release. I think it should be fairly easy and non-intrusive.

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee




Re: [HACKERS] COPY FREEZE has no warning

2013-02-01 Thread Bruce Momjian
On Tue, Jan 29, 2013 at 08:34:24PM -0500, Noah Misch wrote:
 On Fri, Jan 25, 2013 at 11:28:58PM -0500, Bruce Momjian wrote:
  On Fri, Jan 25, 2013 at 11:08:56PM -0500, Tom Lane wrote:
   Bruce Momjian br...@momjian.us writes:
!	ereport(ERROR,
!			(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
!			 errmsg("cannot perform FREEZE because of previous table activity in the current transaction")));
   
   [ itch... ]  What is table activity?  I always thought of tables as
   being rather passive objects.  And anyway, isn't this backwards?  What
   we're complaining of is *lack* of activity.  I don't see why this isn't
   using the same message as the other code path, namely
  
  Well, here is an example of this message:
  
  BEGIN;
  TRUNCATE vistest;
  SAVEPOINT s1;
  COPY vistest FROM stdin CSV FREEZE;
  ERROR:  cannot perform FREEZE because of previous table activity in the 
  current transaction
  COMMIT;
  
  Clearly it was truncated in the same transaction, but the savepoint
  somehow invalidates the freeze.  There is a C comment about it:
 
 The savepoint prevents the COPY FREEZE, because COPY FREEZE needs the table to
 have been created or truncated in the current *sub*transaction.  Issuing
 RELEASE s1 before the COPY makes it work again, for example.
 
  
   * BEGIN;
   * TRUNCATE t;
   * SAVEPOINT save;
   * TRUNCATE t;
   * ROLLBACK TO save;
   * COPY ...
 
 This is different.  The table was truncated in the current subtransaction, and
 it's safe in principle to apply the optimization.  Due to an implementation
 artifact, we'll reject it anyway.

OK, so, should we change the error message:

cannot perform FREEZE because of transaction activity after table
creation or truncation

to

cannot perform FREEZE because the table was not created or
truncated in the current subtransaction

or do we need to keep the "transaction activity" weasel wording because
of the second case you listed above?  I am suspecting the latter.
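To make the subtransaction point concrete: per Noah's note upthread, the
failing example starts working once the savepoint is released before the
COPY, putting the COPY back in the truncating subtransaction. A sketch
(using the same `vistest` table as the example above):

```sql
BEGIN;
TRUNCATE vistest;
SAVEPOINT s1;
RELEASE s1;                          -- back in the truncating subtransaction
COPY vistest FROM stdin CSV FREEZE;  -- now permitted
COMMIT;
```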

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] PL/PgSQL STRICT

2013-02-01 Thread Marko Tiikkaja
On Fri, 01 Feb 2013 18:11:13 +0100, Peter Eisentraut <pete...@gmx.net> wrote:

On 1/26/13 11:28 AM, Marko Tiikkaja wrote:

Here's the second version of the patch, addressing the syntax issues.


I think the new syntax is horribly ugly.  The actual command name should
always come first, not options.  What will happen if people add more
options along this line?


WITH foo AS (..) SELECT ..;  doesn't have the command first either.

I don't really see what other plpgsql-specific options we would add..


I also couldn't make the grammar work with PERFORM STRICT, so I allowed
STRICT SELECT  instead.


I don't quite understand the reason for distinguishing PERFORM and
SELECT, but what you are proposing will make this even more confusing.


That said, I don't quite believe in the premise for this patch to begin
with.  The supposed analogy is with INTO STRICT.  But that is
effectively a variable assignment and checks whether that assignment was
performed correctly.  So for scalar variables, this checks that exactly
one value was returned.  I'd imagine if we allowed a syntax like ...
INTO (a, b, c), (d, e, f) it would check that exactly two rows were
returned.  So this clause basically just ensures that the run-time
behavior is consistent with the appearance of the source code.


Fine, I can see why you see it that way.


What you are now proposing is that STRICT means one, but that's just
an opinion.  The SQL standard recognizes that updating or deleting
nothing is a noteworthy condition, so I could get partially behind this
patch if it just errored out when zero rows are affected, but insisting
on one is just arbitrary.


*shrug*

To me, this makes the most sense.  In my experience, if you know
something should be there, it's exactly one row, not one or more.  Not
throwing an error on more than one row would make this feature a lot
less useful in practice.




Regards,
Marko Tiikkaja




[HACKERS] GetOldestXmin going backwards is dangerous after all

2013-02-01 Thread Tom Lane
I've been able to reproduce the problem reported by Pius Chan in bug
#7819.  With some logging added that prints the OldestXmin values used
by vacuum and cluster operations, the reason is fairly clear:

2013-02-01 13:41:12 EST 8011 LOG:  vacuuming test with OldestXmin 1760160 
FreezeLimit 4246727456
2013-02-01 13:41:14 EST 8011 LOG:  automatic vacuum of table 
test.public.test: index scans: 1
pages: 0 removed, 635 remain
tuples: 48895 removed, 91 remain
system usage: CPU 0.00s/0.03u sec elapsed 1.64 sec
2013-02-01 13:41:14 EST 8011 LOG:  automatic analyze of table 
test.public.test system usage: CPU 0.00s/0.00u sec elapsed 0.09 sec
2013-02-01 13:41:14 EST 8011 LOG:  vacuuming pg_toast_115435 with OldestXmin 
1761518 FreezeLimit 4246728814
2013-02-01 13:41:57 EST 8011 LOG:  automatic vacuum of table 
test.pg_toast.pg_toast_115435: index scans: 1
pages: 0 removed, 12904 remain
tuples: 7624 removed, 43992 remain
system usage: CPU 0.21s/0.13u sec elapsed 42.79 sec
2013-02-01 13:41:58 EST 7025 LOG:  clustering test with OldestXmin 1752719 
FreezeXid 1630979
2013-02-01 13:41:58 EST 7025 ERROR:  missing chunk number 0 for toast value 
215828 in pg_toast_115435

IOW, autovacuum worker 8011 vacuumed test with one xmin, and then a
bit later vacuumed its toast table with a slightly newer xmin.  This
means that certain tuples' toast tuples could get cleaned out of the
toast table while the parent tuples are still there in the heap.
Normally this is fine because the parent tuples are certainly dead and
beyond snapshot range, so no one will make any attempt to fetch their
toast values.  But then, here comes CLUSTER with an older xmin ---
so it thinks it needs to copy over some of those dead-but-possibly-
still-visible-to-somebody tuples, and kaboom.
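The sequence can be sketched as a toy model (Python, not PostgreSQL code; the xids are taken from the log above, the tuple ids are made up): the heap vacuum's older horizon keeps a recently-dead tuple, the toast vacuum's newer horizon removes its chunk, and CLUSTER's still-older horizon then tries to copy the tuple.

```python
# Hypothetical toy model of the race; "vacuum" removes only tuples whose
# deleting xid precedes everyone's horizon (oldest_xmin).

def vacuum(rows, oldest_xmin):
    # A tuple is removable only if it was deleted before the horizon.
    return {tid: t for tid, t in rows.items() if t["xmax"] >= oldest_xmin}

heap  = {1: {"xmax": 1761000, "toast_value": 215828}}  # deleted at xid 1761000
toast = {215828: "chunk 0"}                            # its out-of-line datum

# Heap vacuumed first with OldestXmin 1760160: 1761000 >= 1760160, tuple kept.
heap = vacuum(heap, 1760160)

# Toast table vacuumed a bit later with a *newer* OldestXmin 1761518:
# 1761000 < 1761518, so the chunk goes away while its parent tuple remains.
if heap[1]["xmax"] < 1761518:
    del toast[215828]

# CLUSTER then computes an *older* OldestXmin 1752719, deems the heap tuple
# possibly still visible to somebody, and tries to fetch its toast value:
missing = heap[1]["xmax"] >= 1752719 and heap[1]["toast_value"] not in toast
print("missing chunk number 0 for toast value 215828" if missing else "ok")
```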

Now, CLUSTER is very careful to make sure that it gets a lock on the
toast table before computing OldestXmin, with the express intent of
preventing this type of problem (cf comment in copy_heap_data).
However, if the result of GetOldestXmin() can ever go backwards,
that doesn't help.  And guess what ... it can.

The header comment for GetOldestXmin() contains a long rationalization
for why it's safe for the result to sometimes go backwards, but now
that I've seen this example I'm pretty sure that that's all horsepucky.

I think we need to either ensure that GetOldestXmin is strictly
nondecreasing, or find some more-direct way of interlocking vacuuming
of toast tables with their parents.

We could possibly do the former by tracking the last-known value in
shared memory, but it'd be complicated because the value can
(legitimately) vary depending on which database you ask about.

The only backpatchable way I can think of to do the latter is to undo
the decoupling of main and toast-table vacuuming, and then say that
we simply don't change the value of OldestXmin when going to vacuum
the toast table.  This is a bit unpleasant.

In any case, I no longer have much faith in the idea that letting
GetOldestXmin go backwards is really safe.

The one bright spot is that AFAICS this isn't a true data loss hazard;
no possibly-still-needed data can be lost.  Any failures would be
transient failures of CLUSTER or VACUUM FULL.

Thoughts?

regards, tom lane




Re: [HACKERS] Re: [COMMITTERS] pgsql: Tolerate timeline switches while pg_basebackup -X fetch is run

2013-02-01 Thread Robert Haas
On Tue, Jan 29, 2013 at 1:55 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Heikki Linnakangas wrote:
 Tolerate timeline switches while pg_basebackup -X fetch is running.

 I just noticed that this commit introduced a few error messages that
 have a file argument which is not properly quoted:

 +   ereport(ERROR,
 +   (errcode_for_file_access(),
 +errmsg("requested WAL segment %s has already been removed",
 +   filename)));
 
 +   ereport(ERROR,
 +   (errmsg("could not find WAL file %s", startfname)));

 The first one seems to come from e57cd7f0a16, which is pretty old so
 it's a bit strange that no one noticed.

 Not sure what to do here ... should we just update everything including
 the back branches, or just leave them alone and touch master only?

-1 from me on any message changes in the back-branches.  It's not
worth confusing large parsing software that's already out there, and
it's definitely not worth forcing people to make the regex contingent
on which *minor* version is in use.  But +1 for making it consistent
in HEAD.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] sql_drop Event Trigger

2013-02-01 Thread Robert Haas
On Wed, Jan 30, 2013 at 12:59 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 So please find attached to this email an implementation of the sql_drop
 event trigger, that refrains on exposing any new information to the
 users.

 Already a v1 of that patch, per comments from Álvaro I reuse the
 ObjectAddresses facility rather than building my own List of object
 addresses.

 Note that with that in place it's now easy to also add support for the
 DROP OWNED BY command, but that's left for a future patch as I expect
 some amount of discussion to go about it.

 Also, I removed the code that was doing de-duplication of the object
 addresses we collect, now trusting performMultipleDeletions() not to
 screw us up. There's a use case that needs particular attention here,
 though:

 DROP TABLE foo, foo;

 I'm not sure we want to deduplicate foo in the pg_dropped_objects()
 output in that case, so I've not done so in this version of the patch.
 Also, Álvaro is concerned that the cost of deduplicating might be higher
 than what we want to take here.

Taking a first look at this, I think the idea of pg_dropped_objects()
is really pretty clever.  I like it.  I assume we will end up with
several functions of this type eventually, so it might be good to
adopt some kind of distinguishing naming convention for this type of
function.  pg_event_trigger_context_dropped_objects() seems far too
verbose, but that's the idea.

With this approach, there's no real need to introduce a new event
type.  We could just make ddl_command_end triggers able to use this,
and we're done.  The point of sql_drop was that it would have been
called once *per dropped object*, not once per command.  But,
actually, thinking about this, for something like Slony, exposing
pg_dropped_objects() to ddl_command_end triggers should be just as
good, and maybe a whole lot better, than what I was proposing.

Does it work for you to rip out sql_drop and just make
pg_dropped_objects() - perhaps renamed - available in ddl_command_end?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] cannot move relocatable extension out of pg_catalog schema

2013-02-01 Thread Peter Eisentraut
create extension hstore with schema pg_catalog;
alter extension hstore set schema public;
ERROR:  0A000: cannot remove dependency on schema pg_catalog because it
is a system object
drop extension hstore;  -- works

I've seen this happen cleaning up after mistakenly misplaced extensions.
 I suspect this is a bug.




Re: [HACKERS] cannot move relocatable extension out of pg_catalog schema

2013-02-01 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 create extension hstore with schema pg_catalog;
 alter extension hstore set schema public;
 ERROR:  0A000: cannot remove dependency on schema pg_catalog because it
 is a system object
 drop extension hstore;  -- works

 I've seen this happen cleaning up after mistakenly misplaced extensions.
  I suspect this is a bug.

It's not a bug, it's an intentional implementation restriction that
would be quite expensive to remove.

The reason it fails is that we don't record dependencies on system
objects, and therefore there's no way for ALTER EXTENSION to modify
those dependencies when trying to do SET SCHEMA.  That is, since
pg_catalog is pinned, we don't have any explicit record of which
objects in the extension would've needed dependencies on it, thus
no way to manufacture the dependencies on schema public that would
need to exist after the SET SCHEMA.

AFAICS the only maintainable fix would be to start storing dependencies
on pinned objects explicitly, which would make for enormous and 99.99%
useless bloat in pg_depend.

I wonder whether it'd not be a better idea to forbid specifying
pg_catalog as the target schema for relocatable extensions.

regards, tom lane




[HACKERS] some psql table output flaws

2013-02-01 Thread Peter Eisentraut
I have encountered two unrelated flaws in the psql table output.

First, when using unaligned vertical mode (\a \x on), there is always an
empty line after the last record.  This also means that an empty result
set prints an empty line, instead of nothing.

Second, when using aligned vertical mode (\x on), an empty result set
prints (No rows).  That's fine, but there is no way to turn this off.
 I intuitively attempted to use \t (tuples only), but that had no
effect.  \t doesn't really have a meaning in vertical mode, because the
field names are always printed, but I think it could/should have the
effect of shutting off footer lines.

Patch for both issues attached.
diff --git a/src/bin/psql/print.c b/src/bin/psql/print.c
index 2a34fb3..0722c98 100644
--- a/src/bin/psql/print.c
+++ b/src/bin/psql/print.c
@@ -439,10 +439,13 @@ static void IsPagerNeeded(const printTableContent *cont, const int extra_lines,
 	}
 
 	/* see above in print_unaligned_text() */
-	if (cont->opt->recordSep.separator_zero)
-		print_separator(cont->opt->recordSep, fout);
-	else
-		fputc('\n', fout);
+	if (need_recordsep)
+	{
+		if (cont->opt->recordSep.separator_zero)
+			print_separator(cont->opt->recordSep, fout);
+		else
+			fputc('\n', fout);
+	}
 	}
 }
 
@@ -1169,7 +1172,8 @@ static void IsPagerNeeded(const printTableContent *cont, const int extra_lines,
 	if (cont->cells[0] == NULL && cont->opt->start_table &&
 		cont->opt->stop_table)
 	{
-		fprintf(fout, _("(No rows)\n"));
+		if (!opt_tuples_only)
+			fprintf(fout, _("(No rows)\n"));
 		return;
 	}
 
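The intended behavior of the two hunks can be sketched in Python (a hypothetical mirror of the logic, not the real print.c code): emit a record separator only between records, and let tuples-only mode suppress the "(No rows)" footer.

```python
# Sketch of the patched vertical-output logic; names like print_vertical
# and the "name|value" layout are illustrative, not psql's actual code.

def print_vertical(records, tuples_only=False, recordsep="\n"):
    out = []
    if not records:
        if not tuples_only:          # second hunk: honor \t for the footer
            out.append("(No rows)\n")
        return "".join(out)
    need_recordsep = False
    for rec in records:
        if need_recordsep:           # first hunk: separator only *between*
            out.append(recordsep)    # records, never after the last one
        for name, value in rec:
            out.append(f"{name}|{value}\n")
        need_recordsep = True
    return "".join(out)

print(print_vertical([]))                               # (No rows)
print(repr(print_vertical([], tuples_only=True)))       # ''
```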



Re: [HACKERS] some psql table output flaws

2013-02-01 Thread Erik Rijkers
On Fri, February 1, 2013 21:22, Peter Eisentraut wrote:
 I have encountered two unrelated flaws in the psql table output.

 First, when using unaligned vertical mode (\a \x on), there is always an
 empty line after the last record.  This also means that an empty result
 set prints an empty line, instead of nothing.

 Second, when using aligned vertical mode (\x on), an empty result set
 prints (No rows).  That's fine, but there is no way to turn this off.
  I intuitively attempted to use \t (tuples only), but that had no
 effect.  \t doesn't really have a meaning in vertical mode, because the
 field names are always printed, but I think it could/should have the
 effect of shutting off footer lines.

 Patch for both issues attached.


+1

I'd be very glad not to have to 'grep -v' this stuff away all the time






Re: [HACKERS] GetOldestXmin going backwards is dangerous after all

2013-02-01 Thread Robert Haas
On Fri, Feb 1, 2013 at 2:35 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 In any case, I no longer have much faith in the idea that letting
 GetOldestXmin go backwards is really safe.

That is admittedly kind of weird behavior, but I think one could
equally blame this on CLUSTER.  This is hardly the first time we've
had to patch CLUSTER's handling of TOAST tables (cf commits
21b446dd0927f8f2a187d9461a0d3f11db836f77,
7b0d0e9356963d5c3e4d329a917f5fbb82a2ef05,
83b7584944b3a9df064cccac06822093f1a83793) and it doesn't seem unlikely
that we might go the full ten rounds.

Having said that, I agree that a fix in GetOldestXmin() would be nice
if we could find one, but since the comment describes at least three
different ways the value can move backwards, I'm not sure that there's
really a practical solution there, especially if you want something we
can back-patch.  I'm more inclined to think that we need to find a way
to make CLUSTER more robust against xmin regressions, but I confess I
don't have any good ideas about how to do that, either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: [PATCH 1/5] Centralize Assert* macros into c.h so its common between backend/frontend

2013-02-01 Thread Alvaro Herrera
Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2013-01-18 10:33:16 -0500, Tom Lane wrote:
  Really I'd prefer not to move the backend definitions out of postgres.h
  at all, just because doing so will lose fifteen years of git history
  about those particular lines (or at least make it a lot harder to
  locate with git blame).
 
  The alternative seems to be sprinkling a few more #ifdef FRONTEND's in
  postgres.h, if thats preferred I can prepare a patch for that although I
  prefer my proposal.
 
 Yeah, surrounding the existing definitions with #ifndef FRONTEND was
 what I was imagining.  But on reflection that seems pretty darn ugly,
 especially if the corresponding FRONTEND definitions are far away.
 Maybe we should just live with the git history disconnect.
 
 Right at the moment the c.h location seems like the best bet.

Since there seems to be consensus that this is the best way to go, I
have committed it.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-02-01 Thread Robert Haas
On Thu, Jan 31, 2013 at 3:18 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 My intention was to apply a Nasby correction to Browne Strength and call
 the resulting function Browne' (Browne prime).  Does that sound better?

/me rests head in hands.  I'm not halfway clever enough to hang with
this crowd; I'm not even going to touch the puns in Chris' reply.

 Now seriously, I did experiment a bit with this and it seems to behave
 reasonably.  Of course, there might be problems with it, and I don't
 oppose to changing the name.  Vacuum strength didn't sound so great,
 so I picked the first term that came to mind.  It's not like picking
 people's last names to name stuff is a completely new idea; that said,
 it was sort of a joke.

I don't think I really understand the origin of the formula, so
perhaps if someone would try to characterize why it seems to behave
reasonably that would be helpful (at least to me).

 f(deadtuples, relpages, age) =
deadtuples/relpages + e ^ (age*ln(relpages)/2^32)

To maybe make that discussion go more quickly let me kvetch about a
few things to kick things off:

- Using deadtuples/relpages as part of the formula means that tables
with smaller tuples (thus more tuples per page) will tend to get
vacuumed before tables with larger tuples (thus less tuples per page).
 I can't immediately see why that's a good thing.

- It's probably important to have a formula where we can be sure that
the wrap-around term will eventually dominate the dead-tuple term,
with enough time to spare to make sure nothing really bad happens; on
the other hand, it's also desirable to avoid the case where a table
that has just crossed the threshold for wraparound vacuuming doesn't
immediately shoot to the top of the list even if it isn't truly
urgent.  It's unclear to me just from looking at this formula how well
the second term meets those goals.

- More generally, it seems to me that we ought to be trying to think
about the units in which these various quantities are measured.  Each
term ought to be unit-less.  So perhaps the first term ought to divide
dead tuples by total tuples, which has the nice property that the
result is a dimensionless quantity that never exceeds 1.0.  Then the
second term can be scaled somehow based on that value.
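The two first-term variants under discussion can be compared with a small Python illustration (the function names and sample numbers here are made up for demonstration, not from any patch):

```python
import math

XID_WRAP = 2.0 ** 32

def browne(deadtuples, relpages, age):
    # The formula quoted above: dead tuples per page, plus an exponential
    # wraparound term that grows with relfrozenxid age.
    return deadtuples / relpages + math.exp(age * math.log(relpages) / XID_WRAP)

def browne_dimensionless(deadtuples, reltuples, relpages, age):
    # The suggested variant: make the first term unit-less by dividing dead
    # tuples by total tuples, so it never exceeds 1.0 for sane inputs.
    return deadtuples / reltuples + math.exp(age * math.log(relpages) / XID_WRAP)

# Same dead-tuple count and age, but narrower tuples (so fewer pages): the
# per-page variant ranks the narrow-tuple table as more urgent.
narrow = browne(deadtuples=50_000, relpages=500,   age=1_000_000)
wide   = browne(deadtuples=50_000, relpages=5_000, age=1_000_000)
print(narrow > wide)   # True
```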

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-02-01 Thread Jeff Janes
On Wed, Jan 30, 2013 at 6:55 AM, Andres Freund and...@2ndquadrant.com wrote:

 c.f.
 vacuum_set_xid_limits:
 /*
  * Determine the table freeze age to use: as specified by the 
 caller,
  * or vacuum_freeze_table_age, but in any case not more than
  * autovacuum_freeze_max_age * 0.95, so that if you have e.g 
 nightly
  * VACUUM schedule, the nightly VACUUM gets a chance to 
 freeze tuples
  * before anti-wraparound autovacuum is launched.
  */
 freezetable = freeze_min_age;
 if (freezetable < 0)
 freezetable = vacuum_freeze_table_age;
 freezetable = Min(freezetable, autovacuum_freeze_max_age * 0.95);
 Assert(freezetable >= 0);

 /*
  * Compute the cutoff XID, being careful not to generate a 
 permanent
  * XID.
  */
 limit = ReadNewTransactionId() - freezetable;
 if (!TransactionIdIsNormal(limit))
 limit = FirstNormalTransactionId;

 *freezeTableLimit = limit;

 lazy_vacuum_rel:
 scan_all = TransactionIdPrecedesOrEquals(onerel->rd_rel->relfrozenxid,
  freezeTableLimit);

 If you're careful you can also notice that there is an interesting typo
 in the freeze table computation. Namely it uses freeze_min_age instead
 of freeze_table_age. Which probably explains why I had so bad
 performance results with lowering vacuum_freeze_min_age, it basically
 radically increases the amount of full-table-scans, far more than it
 should.

 I can't imagine that anybody with a large database ran pg successfully
 with a small freeze_min_age due to this.

As far as I can tell this bug kicks in when your cluster gets to be
older than freeze_min_age, and then lasts forever after.  After that
point pretty much every auto-vacuum inspired by update/deletion
activity will get promoted to a full table scan.  (Which makes me
wonder how much field-testing the vm-only vacuum has received, if it
was rarely happening in practice due to this bug.)

Lowering the setting of freeze_min_age does not make the bug worse, it
only makes it manifest earlier in the lifetime of the database.



Cheers,

Jeff




Re: [HACKERS] json api WIP patch

2013-02-01 Thread Robert Haas
On Thu, Jan 31, 2013 at 7:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Merlin Moncure mmonc...@gmail.com writes:
 On Thu, Jan 31, 2013 at 4:20 PM, Andrew Dunstan and...@dunslane.net wrote:
 On 01/31/2013 05:06 PM, Peter Eisentraut wrote:
 I would like to not create any -> operators, so that that syntax could
 be used in the future for method invocation or something similar (it's
 in the SQL standard).

 This is the first time I have heard that we should stay away from this. We
 have operators with this name in hstore, which is why I chose it.

 I'm not happy about this either.  It's bad enough that we're thinking
 about taking away =>, but to disallow -> as well?  My inclination is to
 just say no, we're not implementing that.  Even if we remove the contrib
 operators named that way, it's insane to suppose that nobody has chosen
 these names for user-defined operators in their applications.

I think it's smarter for us to ship functions, and let users wrap them
in operators if they so choose.  It's not difficult for people who
want it to do it, and that way we dodge this whole mess.

The thing that was really awful about hstore's => operator is that it
was =>(text, text), meaning that if somebody else invented, say,
xstore, they could not do the same thing that hstore did without
colliding with hstore.  I think we ought to have an ironclad rule that
any operators introduced in our core distribution for particular types
must have at least one argument of that type.  Otherwise, we'll end up
with a free-for-all where everyone tries to grab the good operator
names (of which there are only a handful) for their type, and types
we're adding five years from now will be stuck with no operator names
at all, or dumb stuff like .

But even leaving that aside, I'm surprised to hear so many people
dismissing SQL standards compliance so blithely.  We've certainly
spent a lot of blood, sweat, and tears on minor standards-compliance
issues over the years - why is it OK to not care about this
particular issue when we've spent so much effort caring about other
ones?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] cannot move relocatable extension out of pg_catalog schema

2013-02-01 Thread Peter Eisentraut
On 2/1/13 3:21 PM, Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
 create extension hstore with schema pg_catalog;
 alter extension hstore set schema public;
 ERROR:  0A000: cannot remove dependency on schema pg_catalog because it
 is a system object
 drop extension hstore;  -- works
 
 I've seen this happen cleaning up after mistakenly misplaced extensions.
  I suspect this is a bug.
 
 It's not a bug, it's an intentional implementation restriction that
 would be quite expensive to remove.
 
 The reason it fails is that we don't record dependencies on system
 objects, and therefore there's no way for ALTER EXTENSION to modify
 those dependencies when trying to do SET SCHEMA.  That is, since
 pg_catalog is pinned, we don't have any explicit record of which
 objects in the extension would've needed dependencies on it, thus
 no way to manufacture the dependencies on schema public that would
 need to exist after the SET SCHEMA.

Fair enough.  It's not that important.

 I wonder whether it'd not be a better idea to forbid specifying
 pg_catalog as the target schema for relocatable extensions.

But that would be important, I think.





Re: [HACKERS] obsolete code

2013-02-01 Thread Peter Eisentraut
On 2/1/13 11:06 AM, Andrew Dunstan wrote:
 My hope was that if we got rid of the old stuff we wouldn't need to use
 
PG_FUNCTION_INFO_V1(myfunc);

 in external modules any more

My fear with that would be that people would start forgetting this and
modules won't work with older PG versions anymore.





Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-02-01 Thread Andres Freund
On 2013-02-01 14:05:46 -0800, Jeff Janes wrote:
 On Wed, Jan 30, 2013 at 6:55 AM, Andres Freund and...@2ndquadrant.com wrote:
 
  c.f.
  vacuum_set_xid_limits:
  /*
   * Determine the table freeze age to use: as specified by 
  the caller,
   * or vacuum_freeze_table_age, but in any case not more than
   * autovacuum_freeze_max_age * 0.95, so that if you have 
  e.g nightly
   * VACUUM schedule, the nightly VACUUM gets a chance to 
  freeze tuples
   * before anti-wraparound autovacuum is launched.
   */
  freezetable = freeze_min_age;
  if (freezetable < 0)
  freezetable = vacuum_freeze_table_age;
  freezetable = Min(freezetable, autovacuum_freeze_max_age * 0.95);
  Assert(freezetable >= 0);
 
  /*
   * Compute the cutoff XID, being careful not to generate a 
  permanent
   * XID.
   */
  limit = ReadNewTransactionId() - freezetable;
  if (!TransactionIdIsNormal(limit))
  limit = FirstNormalTransactionId;
 
  *freezeTableLimit = limit;
 
  lazy_vacuum_rel:
  scan_all = TransactionIdPrecedesOrEquals(onerel->rd_rel->relfrozenxid,
                                           freezeTableLimit);
 
  If you're careful you can also notice that there is an interesting typo
  in the freeze table computation. Namely it uses freeze_min_age instead
  of freeze_table_age. Which probably explains why I had so bad
  performance results with lowering vacuum_freeze_min_age, it basically
  radically increases the amount of full-table-scans, far more than it
  should.
 
  I can't imagine that anybody with a large database ran pg successfully
  with a small freeze_min_age due to this.
 
 As far as I can tell this bug kicks in when your cluster gets to be
 older than freeze_min_age, and then lasts forever after.  After that
 point pretty much every auto-vacuum inspired by update/deletion
 activity will get promoted to a full table scan.  (Which makes me
 wonder how much field-testing the vm-only vacuum has received, if it
 was rarely happening in practice due to this bug.)

I think you're misreading the code. freezeTableLimit is calculated by
  limit = ReadNewTransactionId() - freezetable;
which is always relative to the current xid. The bug was that
freezetable had the wrong value in autovac due to freeze_min_age being
used instead of freeze_table_age.
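The effect of the typo can be sketched numerically (Python; the GUC values are the stock defaults, the xids are made up, and freeze_table_limit is an illustrative model rather than backend code):

```python
# Default GUC settings assumed for illustration.
autovacuum_freeze_max_age = 200_000_000
vacuum_freeze_table_age   = 150_000_000
vacuum_freeze_min_age     = 50_000_000

def freeze_table_limit(current_xid, seed):
    # 'seed' models what the code copies into freezetable; a negative value
    # falls back to vacuum_freeze_table_age, as in the quoted snippet.
    freezetable = seed if seed >= 0 else vacuum_freeze_table_age
    freezetable = min(freezetable, autovacuum_freeze_max_age * 0.95)
    return current_xid - freezetable

current_xid  = 300_000_000
relfrozenxid = 160_000_000

buggy = freeze_table_limit(current_xid, vacuum_freeze_min_age)  # typo: min age
fixed = freeze_table_limit(current_xid, -1)                     # table age

# scan_all = TransactionIdPrecedesOrEquals(relfrozenxid, freezeTableLimit):
print(relfrozenxid <= buggy)   # True: full-table scan forced
print(relfrozenxid <= fixed)   # False: visibility-map-only vacuum suffices
```

Seeding from the much smaller freeze_min_age pushes the cutoff far closer to the current xid, so relfrozenxid falls behind it almost immediately.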


Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] json api WIP patch

2013-02-01 Thread Daniel Farina
On Fri, Feb 1, 2013 at 2:08 PM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Jan 31, 2013 at 7:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Merlin Moncure mmonc...@gmail.com writes:
 On Thu, Jan 31, 2013 at 4:20 PM, Andrew Dunstan and...@dunslane.net wrote:
 On 01/31/2013 05:06 PM, Peter Eisentraut wrote:
  I would like to not create any -> operators, so that that syntax could
 be used in the future for method invocation or something similar (it's
 in the SQL standard).

 This is the first time I have heard that we should stay away from this. We
 have operators with this name in hstore, which is why I chose it.

 I'm not happy about this either.  It's bad enough that we're thinking
  about taking away =>, but to disallow -> as well?  My inclination is to
 just say no, we're not implementing that.  Even if we remove the contrib
 operators named that way, it's insane to suppose that nobody has chosen
 these names for user-defined operators in their applications.

 I think it's smarter for us to ship functions, and let users wrap them
 in operators if they so choose.  It's not difficult for people who
 want it to do it, and that way we dodge this whole mess.

Normally I'd agree with you, but I think there's a complexity here
that is very frown-inducing, although I'm not positively inclined to
state that it's so great that your suggested solution is not the least
of all evils:

  http://www.postgresql.org/message-id/8551.1331580...@sss.pgh.pa.us

The problem being: even though pg_operator resolves to functions in
pg_proc, they have distinct identities as far as the planner is
concerned w.r.t selectivity estimation and index selection.  Already
there is a slight hazard that some ORMs that want to grow hstore
support will spell it fetchval and others -.  So far, infix
syntax seems to be the common default, but a minor disagreement among
ORMs or different user preferences will be destructive.

Another way to look at this is that by depriving people multiple
choices in the default install, that hazard goes away.  But it also
means that, practically, CREATE OPERATOR is going to be hazardous to
use because almost all software is probably not going to assume the
existence of any non-default installed operators for JSON, and those
that do will not receive the benefit of indexes from software using
the other notation.  So, I think that if one takes the 'when in doubt,
leave it out' approach you seem to be proposing, I also think that
very little profitable use of CREATE OPERATOR will take place -- ORMs
et al will choose the lowest common denominator (for good sensible
reason) and flirting with CREATE OPERATOR will probably cause
surprising plans, so I doubt it'll take hold.

--
fdr




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-02-01 Thread Christopher Browne
On Fri, Feb 1, 2013 at 4:59 PM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Jan 31, 2013 at 3:18 PM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
 My intention was to apply a Nasby correction to Browne Strength and call
 the resulting function Browne' (Browne prime).  Does that sound better?

 /me rests head in hands.  I'm not halfway clever enough to hang with
 this crowd; I'm not even going to touch the puns in Chris' reply.

It's Friday... Fun needs to be had :-).

 Now seriously, I did experiment a bit with this and it seems to behave
 reasonably.  Of course, there might be problems with it, and I don't
 oppose to changing the name.  Vacuum strength didn't sound so great,
 so I picked the first term that came to mind.  It's not like picking
 people's last names to name stuff is a completely new idea; that said,
 it was sort of a joke.

 I don't think I really understand the origin of the formula, so
 perhaps if someone would try to characterize why it seems to behave
 reasonably that would be helpful (at least to me).

 f(deadtuples, relpages, age) =
deadtuples/relpages + e ^ (age*ln(relpages)/2^32)

 To maybe make that discussion go more quickly let me kvetch about a
 few things to kick things off:

 - Using deadtuples/relpages as part of the formula means that tables
 with smaller tuples (thus more tuples per page) will tend to get
 vacuumed before tables with larger tuples (thus less tuples per page).
  I can't immediately see why that's a good thing.

That wasn't intentional, and may be somewhat unfortunate.

I picked values that I knew could be easily grabbed, and we don't
have an immediate tuples-per-page estimate on pg_class.  An
estimate should be available in pg_statistic; I'm not sure that the
bias from this hurts things badly.

 - It's probably important to have a formula where we can be sure that
 the wrap-around term will eventually dominate the dead-tuple term,
 with enough time to spare to make sure nothing really bad happens; on
 the other hand, it's also desirable to avoid the case where a table
 that has just crossed the threshold for wraparound vacuuming doesn't
 immediately shoot to the top of the list even if it isn't truly
 urgent.  It's unclear to me just from looking at this formula how well
 the second term meets those goals.

I think the second term *does* provide a way for wraparound to dominate;
splitting it apart a bit...

Consider...

   e ^ (age * ln(relpages) / 2^32)

The wraparound portion of this involves age/2^32...  In the beginning, the
numerator is near zero and the denominator is 2^32 (about 4 billion), so the
fraction is roughly 0 and the exponential is roughly e^0 = 1.  As age trends
towards 2^32, the fraction (ignoring ln(relpages)) trends towards 1, so the
longer we go without vacuuming, the closer the fraction gets to 1.  That
*tends* to give you something looking like e^1, or 2.71828+, ignoring the
relpages part.

I threw in multiplying by ln(relpages) as a way to step Well Back from
rollover; that means that this term will start growing considerably before
rollover, and the larger the table, the sooner that growth takes place.

There is a problem with the ln(relpages) term; if the table has just 1 page,
the ln(relpages) = 0 so the value of the exponential term is *always* 1.
Probably should have ln(relpages+CONSTANT) so that we guarantee
the numerator is never 0.
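A minimal sketch of the proposed guard (the constant and the function name here are placeholders for illustration, not anything in a patch):

```python
import math

def wraparound_term(age, relpages, const=1):
    # e^(age * ln(relpages + const) / 2^32); any const > 0 keeps the
    # exponent nonzero even for a 1-page table
    return math.exp(age * math.log(relpages + const) / 2**32)

# without the guard (const=0), a 1-page table is stuck at e^0 = 1 forever,
# no matter how close age gets to wraparound
stuck = math.exp((2**32 - 1) * math.log(1) / 2**32)

# with the guard, the term for a 1-page table grows toward e^ln(2) = 2
fixed = wraparound_term(2**32 - 1, 1)
```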

I'm a bit worried that the exponential term might dominate *too* quickly.

For a table I have handy with 163K tuples, spread across 3357 pages,
ln(relpage) = 8.1188, and the range of the exponential bit travels
like follows:

dotpro0620@localhost=# select generate_series(1,20)*100/20 as
percent_wraparound, power(2.71828,
(65536.0*32768.0*generate_series(1,20)/20.0 *
ln(3357))/(65536.0*32768)) as wraparound_term;
 percent_wraparound | wraparound_term
--------------------+------------------
                  5 | 1.50071232210687
                 10 |  2.2521374737234
                 15 | 3.37981045789535
                 20 | 5.07212320054923
                 25 | 7.61179778630838
                 30 | 11.4231187312988
                 35 | 17.1428150369499
                 40 | 25.7264337615498
                 45 |  38.607976149824
                 50 | 57.9394655396491
                 55 |  86.950469871638
                 60 |  130.48764154935
                 65 | 195.824411555774
                 70 | 293.876107391077
                 75 | 441.023495534592
                 80 | 661.849394087408
                 85 |  993.24554108594
                 90 | 1490.57582238538
                 95 | 2236.92550368832
                100 | 3356.98166702019
(20 rows)

At the beginning, the wraparound portion is just 1.5, so easily dominated by
a table with a lot of dead tuples.  As the time to wraparound declines,
the term becomes steadily more urgent.  There may be constants
factors to fiddle with at the edges, but this term definitely heads towards
dominance.

That's 

Re: [HACKERS] proposal - assign result of query to psql variable

2013-02-01 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 here is patch related to your proposal

I looked at this a bit.  It's getting there, though I still don't trust
the places where you've chosen to clear the prefix setting.  (Looking at
it, I'm now not sure that I trust the implementation of \g either.)

However, what I wanted to ask about was this:

 + if (PQgetisnull(result, 0, i))
 + value = pset.popt.nullPrint ? pset.popt.nullPrint : "";
 + else
 + value = PQgetvalue(result, 0, i);

What's the argument for using nullPrint here?  ISTM that's effectively a
form of escaping, and I'd not expect that to get applied to values going
into variables, any more than any other formatting we do when printing
results.

Admittedly, if we just take the PQgetvalue result blindly, there'll
be no way to tell the difference between empty-string and NULL results.
But I'm not convinced that this approach is better.  It would certainly
need more than no documentation.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] json api WIP patch

2013-02-01 Thread Tom Lane
Daniel Farina dan...@heroku.com writes:
 On Fri, Feb 1, 2013 at 2:08 PM, Robert Haas robertmh...@gmail.com wrote:
 I think it's smarter for us to ship functions, and let users wrap them
 in operators if they so choose.  It's not difficult for people who

 The problem being: even though pg_operator resolves to functions in
 pg_proc, they have distinct identities as far as the planner is
 concerned w.r.t selectivity estimation and index selection.

Yeah, this is surely not a workable policy unless we first move all
those planner smarts to apply to functions not operators.  And rewrite
all the index AM APIs to use functions not operators, too.  Now this is
something that's been a wish-list item right along, but actually doing
it has always looked like a great deal of work for rather small reward.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-02-01 Thread Tom Lane
Christopher Browne cbbro...@gmail.com writes:
 I picked values that I knew could be easily grabbed, and we don't
 have an immediate tuples-per-page estimate on pg_class.

Er, what?  reltuples/relpages is exactly that estimate --- in fact,
it's only because of historical accident that we don't store a single
float field with that ratio, rather than two fields.  Both the creation
and the usage of those numbers work explicitly with the ratio.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-02-01 Thread Jeff Janes
On Fri, Feb 1, 2013 at 2:34 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-02-01 14:05:46 -0800, Jeff Janes wrote:

 As far as I can tell this bug kicks in when your cluster gets to be
 older than freeze_min_age, and then lasts forever after.  After that
 point pretty much every auto-vacuum inspired by update/deletion
 activity will get promoted to a full table scan.  (Which makes me
 wonder how much field-testing the vm-only vacuum has received, if it
 was rarely happening in practice due to this bug.)

 I think you're misreading the code. freezeTableLimit is calculated by

  limit = ReadNewTransactionId() - freezetable;

 which is always relative to the current xid. The bug was that
 freezetable had the wrong value in autovac due to freeze_min_age being
 used instead of freeze_table_age.

Right.  Since freeze_min_age was mistakenly being used, the limit
would be 50 million in the past (rather than 150 million) under
defaults.  But since the last full-table vacuum, whenever that was,
used freeze_min_age for its intended purpose, that means the 50
million in the past *at the time of that last vacuum* is the highest
that relfrozenxid can be.  And that is going to be further back than
50 million from right now, so the vacuum will always be promoted to a
full scan.
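The argument can be sketched numerically, modeling XIDs as plain integers and ignoring circular XID arithmetic (the settings are the stock defaults; the timeline values are hypothetical):

```python
FREEZE_MIN_AGE = 50_000_000      # default vacuum_freeze_min_age
FREEZE_TABLE_AGE = 150_000_000   # default vacuum_freeze_table_age

# the last full-table vacuum ran at xid t0 and, using freeze_min_age
# for its intended purpose, left relfrozenxid = t0 - FREEZE_MIN_AGE
t0 = 1_000_000_000
relfrozenxid = t0 - FREEZE_MIN_AGE

# any later autovacuum computes, under the bug,
# freezeTableLimit = now - FREEZE_MIN_AGE instead of now - FREEZE_TABLE_AGE
now = t0 + 1  # even a single transaction later
buggy_limit = now - FREEZE_MIN_AGE
correct_limit = now - FREEZE_TABLE_AGE

# relfrozenxid always precedes the buggy limit, so scan_all always fires
scan_all_buggy = relfrozenxid < buggy_limit
scan_all_correct = relfrozenxid < correct_limit  # only once the table ages
```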

I am not entirely sure of my logic above[1], but I'm depending on
empirical observation for my conclusion.  The attached patch emits a
log entry telling if scan_all is being used, and it always is used
(under the bug) once the database gets old enough.  Or at least, I've
never seen it not use scan_all after that point.

As an aside, it does seem like log_autovacuum_min_duration=0 should
log whether a scan_all was done, and if so what relfrozenxid got set
to.  But looking at where the log message is generated, I don't know
where to retrieve that info.


[1] I don't know why it is that a scan_all vacuum with a
freeze_min_age of 50m (or a freezeLimit of 50 million ago) will not
set relfrozenxid to a higher value than that if it discovers that it
can, but it doesn't seem to.


Cheers,

Jeff


autovac_log.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Cascading replica waits for write on master to come up

2013-02-01 Thread Josh Berkus
(sending this again; it was eaten by the ethernet)

Heikki,

I thought this was only a 9.3 issue, but it turns out to be
reproducible on 9.2.2.  Basically, I did:

1. master is quiescent ... no writes occurring.
2. created cascading replica (reprep1) from replica (repmaster)
3. reprep1 remains in recovery mode until a write occurs on master

I've been able to reproduce this several times on my laptop using
postmasters running on different ports.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] issues with range types, btree_gist and constraints

2013-02-01 Thread Tomas Vondra

Hi,

I'm having trouble with range types and btree_gist - after some playing
I believe it's caused by a bug in how btree_gist handles text columns.
All this is on a freshly compiled 9.2.2.

I'm trying to achieve almost exactly what's described in the second
example on

http://www.postgresql.org/docs/9.2/interactive/rangetypes.html#RANGETYPES-CONSTRAINT

i.e. I'm maintaining a list of ranges for each ID, except that I'm
using text instead of integers for the ID, so the table looks like this:

=
CREATE TABLE test (
    id        TEXT,
    validity  TSRANGE NOT NULL DEFAULT tsrange('-infinity'::timestamp,
                                               'infinity'::timestamp),
    CONSTRAINT test_validity_check EXCLUDE USING GIST (id WITH =,
        validity WITH &&)
);
=

This table is repeatedly filled with new versions of the data (which
were removed from the demo for sake of simplicity), so I've defined a
trigger that checks if there's a range with an overlapping range, and
splits the range accordingly.

Each record starts with validity=[-infinity, infinity). On the first
update this would be split into [-infinity, now()) and [now(), infinity)
and so on. This is what the following trigger should do:

=
CREATE OR REPLACE FUNCTION test_close() RETURNS trigger AS $$
BEGIN

    -- close the previous record (set upper bound of the range)
    UPDATE test SET validity = tsrange(lower(validity), now()::timestamp)
     WHERE id = NEW.id AND (upper(validity) = 'infinity'::timestamp);

    -- if there was a preceding record, set the lower bound
    -- (otherwise use unbounded range)
    IF FOUND THEN
        NEW.validity := tsrange(now()::timestamp, 'infinity'::timestamp);
    END IF;

    RETURN NEW;

END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_close BEFORE INSERT ON test FOR EACH ROW EXECUTE
PROCEDURE test_close();
=

To generate the sample data, do this:

=
echo SimpleTestString > /tmp/data.csv
for f in `seq 1 10000`; do
    echo $f > /tmp/x.log;
    md5sum /tmp/x.log | awk '{print $1}' >> /tmp/data.csv;
done;
=

The first line (combination of upper and lower-case letters) is what
seems to trigger the behavior. Now load the file into the table
repeatedly, and you'll eventually get this error:


=
db=# copy test(id) from '/tmp/data.csv';
COPY 10001
db=# copy test(id) from '/tmp/data.csv';
COPY 10001
db=# copy test(id) from '/tmp/data.csv';
ERROR:  conflicting key value violates exclusion constraint 
test_validity_check
DETAIL:  Key (id, validity)=(SimpleTestString, [2013-02-01 
23:32:04.329975,infinity))
 conflicts with existing key (id, validity)=(SimpleTestString, 
[-infinity,infinity)).

CONTEXT:  COPY test, line 1: SimpleTestString
=

The number of necessary COPY executions varies - what's even stranger 
is the result of

this select once it fails:

=
test=# select * from test where id = 'SimpleTestString';
        id        |       validity
------------------+----------------------
 SimpleTestString | [-infinity,infinity)
 SimpleTestString | [-infinity,infinity)
(2 rows)
=

Yup, there are two overlapping ranges for the same ID. Moreover, after
disabling bitmap and index scans, the COPY takes much longer but works
just fine (including the trigger).

Creating a plain b-tree index on the ID column seems to fix that too.

That leads me to the belief that this is a bug in the GiST indexing,
and the variations are probably caused by the index scan kicking in
after one of the COPY executions (and reaching some threshold). I'm
using en_US.UTF-8 for the database.

By replacing the infinity with a plain NULL (in the table and
trigger), it fails too, but in a slightly different way. For example
I'm seeing this after the failure:


=
test=# select * from test where id = 'SimpleTest';
     id     |           validity
------------+-------------------------------
 SimpleTest | (,2013-02-02 00:07:07.038324)
(1 row)

test=# set enable_bitmapscan to off;
SET
test=# set enable_indexscan to off;
SET
test=# select * from test where id = 'SimpleTest';
 id |validity

Re: [HACKERS] GetOldestXmin going backwards is dangerous after all

2013-02-01 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Fri, Feb 1, 2013 at 2:35 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 In any case, I no longer have much faith in the idea that letting
 GetOldestXmin go backwards is really safe.

 That is admittedly kind of weird behavior, but I think one could
 equally blame this on CLUSTER.  This is hardly the first time we've
 had to patch CLUSTER's handling of TOAST tables (cf commits
 21b446dd0927f8f2a187d9461a0d3f11db836f77,
 7b0d0e9356963d5c3e4d329a917f5fbb82a2ef05,
 83b7584944b3a9df064cccac06822093f1a83793) and it doesn't seem unlikely
 that we might go the full ten rounds.

Yeah, but I'm not sure whether CLUSTER is the appropriate blamee or
whether it's more like the canary in the coal mine, first to expose
problematic behaviors elsewhere.  The general problem here is really
that we're cleaning out toast tuples while the referencing main-heap
tuple still physically exists.  How safe do you think that is?  That
did not ever happen before we decoupled autovacuuming of main and toast
tables, either --- so a good case could be made that that idea is
fundamentally broken.

 Having said that, I agree that a fix in GetOldestXmin() would be nice
 if we could find one, but since the comment describes at least three
 different ways the value can move backwards, I'm not sure that there's
 really a practical solution there, especially if you want something we
 can back-patch.

Well, if we were tracking the latest value in shared memory, we could
certainly clamp to that to ensure it didn't go backwards.  The problem
is where to find storage for a per-DB value.

I thought about storing each session's latest value in its PGPROC and
taking the max over same-DB sessions during GetOldestXmin's ProcArray
scan, but that doesn't work because an autovacuum process might
disappear and thus destroy the needed info just before CLUSTER looks
for it.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GetOldestXmin going backwards is dangerous after all

2013-02-01 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Having said that, I agree that a fix in GetOldestXmin() would be nice
 if we could find one, but since the comment describes at least three
 different ways the value can move backwards, I'm not sure that there's
 really a practical solution there, especially if you want something we
 can back-patch.

Actually, wait a second.  As you say, the comment describes three known
ways to make it go backwards.  It strikes me that all three are fixable:

 * if allDbs is FALSE and there are no transactions running in the current
 * database, GetOldestXmin() returns latestCompletedXid. If a transaction
 * begins after that, its xmin will include in-progress transactions in other
 * databases that started earlier, so another call will return a lower value.

The reason this is a problem is that GetOldestXmin ignores XIDs of
processes that are connected to other DBs.  It now seems to me that this
is a flat-out bug.  It can ignore their xmins, but it should include
their XIDs, because the point of considering those XIDs is that they may
contribute to the xmins of snapshots computed in the future by processes
in our own DB.  And snapshots never exclude any XIDs on the basis of
which DB they're in.  (They can't really, since we can't know when the
snap is taken whether it might be used to examine shared catalogs.)

 * There are also replication-related effects: a walsender
 * process can set its xmin based on transactions that are no longer running
 * in the master but are still being replayed on the standby, thus possibly
 * making the GetOldestXmin reading go backwards.  In this case there is a
 * possibility that we lose data that the standby would like to have, but
 * there is little we can do about that --- data is only protected if the
 * walsender runs continuously while queries are executed on the standby.
 * (The Hot Standby code deals with such cases by failing standby queries
 * that needed to access already-removed data, so there's no integrity bug.)

This is just bogus.  Why don't we make it a requirement on walsenders
that they never move their advertised xmin backwards (or initially set
it to less than the prevailing global xmin)?  There's no real benefit to
allowing them to try to move the global xmin backwards, because any data
that they might hope to protect that way could be gone already.
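One way to sketch that rule (a simplification: real XIDs are circular and compared with TransactionIdPrecedes(), so plain integer max() stands in for the real comparison, and the function name is hypothetical):

```python
def next_advertised_xmin(candidate, last_advertised, global_xmin):
    """Clamp a walsender's advertised xmin so that it never moves
    backwards, and so that its initial value is never below the
    prevailing global xmin."""
    if last_advertised is None:
        # first advertisement: don't start below the global xmin
        floor = global_xmin
    else:
        # later advertisements: never retreat from what we advertised
        floor = last_advertised
    return max(candidate, floor)
```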

 * The return value is also adjusted with vacuum_defer_cleanup_age, so
 * increasing that setting on the fly is another easy way to make
 * GetOldestXmin() move backwards, with no consequences for data integrity.

And as for that, it's been pretty clear for awhile that allowing
vacuum_defer_cleanup_age to change on the fly was a bad idea we'd
eventually have to undo.  The day of reckoning has arrived: it needs
to be PGC_POSTMASTER.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-02-01 Thread Jeff Janes
On Monday, January 28, 2013, Kevin Grittner wrote:

 IMO, anything which changes an anti-wraparound vacuum of a
 bulk-loaded table from "read the entire table and rewrite nearly
 the complete table with WAL-logging" to "rewriting a smaller portion
 of the table with WAL-logging" is an improvement.  Anyone who has
 run an OLTP load on a database which was loaded from pg_dump output
 or other bulk load processes, has probably experienced the pain
 related to the WAL-logged rewrite of massive quantities of data.


pgbench seems to be the OLTP load par excellence (or perhaps ad nauseam).

What other set up is needed to get it to reproduce this problem?  Do we
just do a dump/restore in lieu of pgbench -i?

Cheers,

Jeff