Re: [HACKERS] pg_dump --pretty-print-views

2013-01-30 Thread Marko Tiikkaja

On 1/30/13 7:52 AM, Jeevan Chalke wrote:

Looks good this time.

Will mark as Ready for Committer.


Thanks for reviewing it more carefully than I did! :-)  And my apologies 
for the confusion earlier.



However, I am not sure about enabling WRAP_COLUMN_DEFAULT by default. In
my opinion we should, but other people may object, so I will leave that
decision on the committer's plate.


I have no opinion on this one way or the other, so I'm fine with 
waiting for someone else (possibly the committer) to decide.



Regards,
Marko Tiikkaja



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Andres Freund
On 2013-01-29 16:09:52 +1100, Josh Berkus wrote:
 
  I have to admit, I fail to see why this is a good idea. There isn't much
  of an efficiency bonus in freezing early (due to hint bits) and vacuums
  over vacuum_freeze_table_age are considerably more expensive as they
  have to scan the whole heap instead of using the visibilitymap. And if
  you don't vacuum the whole heap you can't lower relfrozenxid. So
  changing freeze_min_age doesn't help at all to avoid anti-wraparound
  vacuums.
  
  Am I missing something?
 
 Yep.  First, you're confusing vacuum_freeze_table_age  and
 vacuum_freeze_min_age.

Don't think I did. I was talking about vacuum_freeze_table_age because
that influences the number of full-table scans, as opposed to scans using
the VM. That's an independent thing from anti-wraparound vacuums, which
are triggered by autovacuum_freeze_max_age.

The point I was trying to make is that a very big part of the load is
not actually the freezing itself but the full-table vacuums which are
triggered by freeze_table_age.

 Second, you're not doing any arithmetic.

Because it's not actually as easy to calculate as you make it seem.

Even in the case of a large insert-only table the behaviour is far more
complex than what you describe.
The lifetime of tuples/buffers in that context is approx the following:
- inserted
- written by bgwriter or by checkpoint
- vacuum reads the non-all-visible part of the table
  - vacuum sets HEAP_XMIN_COMMITTED
- freeze_table_age vacuum reads the whole table
  - doesn't find anything to freeze because of freeze_min_age
- freeze_table_age vacuum reads the whole table
  - freezes tuples because they are now older than freeze_min_age
- freeze_table_age vacuum reads the whole table
  - doesn't change anything in our page because it's already frozen

So the big points your computation is missing are that all those
anti-wraparound vacuums a) might not even happen, due to normal vacuums
going over freeze_table_age and thus advancing relfrozenxid; b) don't
rewrite the whole table, because the tuples actually are already frozen;
c) will write out pages repeatedly because of tuples that get changed
again; d) incur full-page writes.

 Let's do this by example.  TableA is a large table which receives an
 almost constant stream of individual row updates, inserts, and deletes.
 
 DEFAULTS:
 
 XID 1:  First rows in TableA are updated.
 XID 200m: Anti-wraparound autovac of TableA.
  All XIDs older than XID 100m set to FROZENXID.

Between those, the table will have been vacuumed already, and depending on
the schedule the tuples will already have been frozen, due to
freeze_min_age being 50m and freeze_table_age being 150m. Before
that, all the tuples will already have been written another time for hint
bit writes.

 XID 300m: Anti-wraparound autovac of TableA
  All XIDs older than XID 200M set to FROZENXID.

Only the newer tuples are going to be rewritten, the older parts of the
table will only be read.

 XID 400m: Anti-wraparound autovac of TableA
  All XIDs older than XID 300M set to FROZENXID.
 XID 500m: Anti-wraparound autovac of TableA
  All XIDs older than XID 400M set to FROZENXID.
 XID 600m: Anti-wraparound autovac of TableA
  All XIDs older than XID 500M set to FROZENXID.

 vacuum_freeze_min_age = 1m
 
 XID 1:  First rows in TableA are updated.
 XID 200m: Anti-wraparound autovac of TableA.
  All XIDs older than XID 199m set to FROZENXID.

Even in an insert-only case the tuples will be written at least twice
before an anti-wraparound freeze, often three times:
- first checkpoint
- hint bit sets due to a normal vacuum
- freezing due to a full-table vacuum

But since, as you assumed, the table also gets deletes and updates, the
low freeze age means that some tuples on a page will get frozen on each
vacuum that reads the page. That incurs a full-page write every time
tuples are frozen, since most of the time the page will last have been
touched before the previous checkpoint. As the WAL is a major bottleneck
on a write-heavy server, that can cause a pretty hefty global slowdown.

It's *good* to freeze tuples only once you're pretty sure they won't be
touched by actual data changes again. As full-table vacuums happen more
frequently than anti-wraparound vacuums anyway, the cost of the actual
anti-wraparound vacuums, should they happen because autovacuum is too
busy, isn't a problem in itself.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 On 2013-01-29 16:09:52 +1100, Josh Berkus wrote:

 I have to admit, I fail to see why this is a good idea. There
 isn't much of an efficiency bonus in freezing early (due to
 hint bits) and vacuums over vacuum_freeze_table_age are
 considerably more expensive as they have to scan the whole heap
 instead of using the visibilitymap. And if you don't vacuum the
 whole heap you can't lower relfrozenxid.  So changing
 freeze_min_age doesn't help at all to avoid anti-wraparound
 vacuums.

 Am I missing something?

 Yep.  First, you're confusing vacuum_freeze_table_age  and
 vacuum_freeze_min_age.

 Don't think I did. I was talking about vacuum_freeze_table_age
 because that influences the amount of full-table scans

Not any more than vacuum_freeze_min_age does.

http://www.postgresql.org/docs/9.2/interactive/routine-vacuuming.html#AUTOVACUUM

| a whole table sweep is forced if the table hasn't been fully
| scanned for vacuum_freeze_table_age minus vacuum_freeze_min_age
| transactions.

So reducing vacuum_freeze_min_age not only helps minimize the
writes that are needed when autovacuum needs to scan the entire
heap, but also decreases the frequency of those full-table scans.

-Kevin




[HACKERS] sql_drop Event Trigger

2013-01-30 Thread Dimitri Fontaine
Hi,

I took the liberty of creating a new thread for $subject, because the
main comment I've been receiving about Event Triggers at this point is
how difficult it is to follow our discussions about them.

In order for everybody interested to be able to easily get the important
bits of information from this patch series and review, I'm now going to
work on a wiki page that I will then update when needed.

The important messages you will need to refer to for more context in
this thread are:

  http://www.postgresql.org/message-id/m21udesaz3@2ndquadrant.fr
  
http://www.postgresql.org/message-id/ca+tgmozz6mxq5zx6dopc_xahvkdwhehgdfjeawsrns+n7e_...@mail.gmail.com

So please find attached to this email an implementation of the sql_drop
event trigger, which refrains from exposing any new information to
users.

COLUMNS=72 git diff --stat master..
 doc/src/sgml/event-trigger.sgml |   98 +++-
 doc/src/sgml/func.sgml  |   47 +++-
 src/backend/catalog/dependency.c|7 +
 src/backend/commands/event_trigger.c|  233 ++-
 src/backend/tcop/utility.c  |   23 +-
 src/backend/utils/cache/evtcache.c  |2 +
 src/include/catalog/pg_proc.h   |4 +-
 src/include/commands/event_trigger.h|   18 ++
 src/include/utils/builtins.h|3 +
 src/include/utils/evtcache.h|3 +-
 src/test/regress/expected/event_trigger.out |   53 +
 src/test/regress/sql/event_trigger.sql  |   37 +++
 12 files changed, 519 insertions(+), 9 deletions(-)

The implementation follows Robert's ideas in that we accumulate
information about the objects we are dropping, then provide it to the
event trigger user function. The way to provide it is a set-returning
function called pg_dropped_objects(), which is only available while a
sql_drop event trigger is running.

This function returns a set of classid, objid, objsubid (as in
pg_depend), but you have to remember that you can't use them for catalog
lookups, as the objects are already dropped: as far as I understand it,
we can't both avoid adding any concurrency hazards (no new lookups,
locking, or extra work in dependency.c) *and* get at the OIDs of the
DROP CASCADEd objects before the drop happens.

I hope to complement the information available in a follow-up patch,
where I intend to provide object name, id, kind, schema name and
operation name in all supported operations.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



sql_drop.0.patch.gz
Description: Binary data



Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Andres Freund
On 2013-01-30 05:39:29 -0800, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:
  On 2013-01-29 16:09:52 +1100, Josh Berkus wrote:
 
  I have to admit, I fail to see why this is a good idea. There
  isn't much of an efficiency bonus in freezing early (due to
  hint bits) and vacuums over vacuum_freeze_table_age are
  considerably more expensive as they have to scan the whole heap
  instead of using the visibilitymap. And if you don't vacuum the
  whole heap you can't lower relfrozenxid.  So changing
  freeze_min_age doesn't help at all to avoid anti-wraparound
  vacuums.
 
  Am I missing something?
 
  Yep.  First, you're confusing vacuum_freeze_table_age  and
  vacuum_freeze_min_age.
 
  Don't think I did. I was talking about vacuum_freeze_table_age
  because that influences the amount of full-table scans
 
 Not any more than vacuum_freeze_min_age does.

Well, vacuum_freeze_min_age is 50m while vacuum_freeze_table_age is
150m.

 http://www.postgresql.org/docs/9.2/interactive/routine-vacuuming.html#AUTOVACUUM
 
 | a whole table sweep is forced if the table hasn't been fully
 | scanned for vacuum_freeze_table_age minus vacuum_freeze_min_age
 | transactions.
 
 So reducing vacuum_freeze_min_age not only helps minimize the
 writes that are needed when autovacuum needs to scan the entire
 heap, but also decreases the frequency of those full-table scans.

But it increases the number of pages that are written out multiple times
because they contain tuples of different ages, in contrast to increasing
vacuum_freeze_table_age, which doesn't have that problem. In combination
with full_page_writes that makes a noticeable difference in total write
volume.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-01-30 Thread MauMau

From: Tom Lane t...@sss.pgh.pa.us

Since we've fixed a couple of relatively nasty bugs recently, the core
committee has determined that it'd be a good idea to push out PG update
releases soon.  The current plan is to wrap on Monday Feb 4 for public
announcement Thursday Feb 7.  If you're aware of any bug fixes you think
ought to get included, now's the time to get them done ...


I've just encountered another serious bug, which I would like to see fixed
in the upcoming minor release.


I'm using streaming replication with PostgreSQL 9.1.6 on Linux (RHEL6.2, 
kernel 2.6.32).  But this problem should happen regardless of the use of 
streaming replication.


When I ran "pg_ctl stop -mi" against the primary, some applications 
connected to the primary did not stop. The cause was that the backends 
were deadlocked in quickdie(), with a call stack like the following. I'm 
sorry, I left the stack trace file on the testing machine, so I'll show 
you the precise stack trace tomorrow.


some lock function
malloc()
gettext()
errhint()
quickdie()
signal handler called because of SIGQUIT
free()
...
PostgresMain()
...

The root cause is that gettext() is called in the signal handler quickdie() 
via errhint().  As you know, malloc() cannot be called in a signal handler:


http://www.gnu.org/software/libc/manual/html_node/Nonreentrancy.html#Nonreentrancy

[Excerpt]
On most systems, malloc and free are not reentrant, because they use a 
static data structure which records what memory blocks are free. As a 
result, no library functions that allocate or free memory are reentrant. 
This includes functions that allocate space to store a result.



And gettext() calls malloc(), as reported below:

http://lists.gnu.org/archive/html/bug-coreutils/2005-04/msg00056.html

I think the solution is the typical one: just remember the receipt of 
SIGQUIT by setting a global variable, call siglongjmp() in quickdie(), 
and perform the tasks currently done in quickdie() when sigsetjmp() 
returns in PostgresMain().


What do you think about the solution? Could you include the fix? If it's 
okay and you want, I'll submit the patch.


Regards
MauMau





Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-01-30 Thread Zoltán Böszörményi

On 2013-01-28 15:20, Hari Babu wrote:

On Saturday, January 19, 2013 11:23 AM Amit kapila wrote:

On Saturday, January 19, 2013 4:13 AM Boszormenyi Zoltan wrote:

Hi,

Unfortunately, I won't have time to do anything with my lock_timeout

patch

for about 3 weeks. Does anyone have a little spare time to test it on

Windows?

I shall try to do it, probably next week.
Others are also welcome to test the patch.

I have carried out some Windows testing of the lock_timeout patch.


Thanks very much. It means it didn't crash for you and
it waited the expected amount of time as well.


The extra tests which are carried out on the patch are attached in the mail.


The patch itself contains regression tests that run on their own
and compete with statement_timeout as well, although
they wait only 2 seconds instead of 60 as in your test.


Some observations on the patch:

1. Patch needs a rebase as it is causing some rejects.


On a fresh GIT pull, it only caused a reject for the documentation
parts of the patch. No other rejects and no fuzz, only line shift
warnings were indicated by the patch. Anyway, a refreshed one
is attached.


2. regress check failed because the expected .out file is not updated
properly.


Which regress check failed? The .out file was updated in the patch
for prepared_xacts.sql where the regression tests for lock_timeout
were added. Or do you mean the one for the sql file you sent?

Thanks,
Zoltán Böszörményi


--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



2-lock_timeout-v28.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-01-30 Thread Zoltán Böszörményi

On 2013-01-30 15:29, Zoltán Böszörményi wrote:

On 2013-01-28 15:20, Hari Babu wrote:

On Saturday, January 19, 2013 11:23 AM Amit kapila wrote:

On Saturday, January 19, 2013 4:13 AM Boszormenyi Zoltan wrote:

Hi,

Unfortunately, I won't have time to do anything with my lock_timeout

patch

for about 3 weeks. Does anyone have a little spare time to test it on

Windows?

I shall try to do it, probably next week.
Others are also welcome to test the patch.

I have carried out some windows testing of the lock timeout patch.


Thanks very much. It means it didn't crash for you and
it waited the expected amount of time as well.

The extra tests which are carried out on the patch are attached in 
the mail.


The patch itself contained regression tests to run by itself
and compete with statement_timeout as well, although
it waits only 2 seconds instead of 60 as in your test.


Some observations on the patch:

1. Patch needs a rebase as it causing some rejections.


On a fresh GIT pull, it only caused a reject for the documentation
parts of the patch. No other rejects and no fuzz, only line shift
warnings were indicated by the patch. Anyway, a refreshed one
is attached.


2. regress check failed because the expected .out file is not updated
properly.


Which regress check failed? The .out file was updated in the patch
for prepared_xacts.sql where the regression tests for lock_timeout
were added. Or do you mean the one for the sql file you sent?


If the failed regression test is indeed the prepared_xacts.sql that is
in my patch, can you attach the regression.diff file after the failed
make check? Thanks.



Thanks,
Zoltán Böszörményi







--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Andres Freund
On 2013-01-30 14:58:24 +0100, Andres Freund wrote:
  So reducing vacuum_freeze_min_age not only helps minimize the
  writes that are needed when autovacuum needs to scan the entire
  heap, but also decreases the frequency of those full-table scans.
 
 But it increases the number of pages that are written out multiple times
 because they contain tuples of different ages, in contrast to increasing
 vacuum_freeze_table_age, which doesn't have that problem. In combination
 with full_page_writes that makes a noticeable difference in total write
 volume.

Btw, as far as I read the code, that behaviour only exists insofar as
the last time vacuum runs it freezes all tuples below freeze_min_age but
not newer ones, so relfrozenxid will only be set to current_xmin -
freeze_min_age. But if you manually freeze, or no such old tuples exist,
it's solely influenced by freeze_table_age.

The relevant parts of the code are:

cf.
vacuum_set_xid_limits:

	/*
	 * Determine the table freeze age to use: as specified by the caller,
	 * or vacuum_freeze_table_age, but in any case not more than
	 * autovacuum_freeze_max_age * 0.95, so that if you have e.g. a nightly
	 * VACUUM schedule, the nightly VACUUM gets a chance to freeze tuples
	 * before anti-wraparound autovacuum is launched.
	 */
	freezetable = freeze_min_age;
	if (freezetable < 0)
		freezetable = vacuum_freeze_table_age;
	freezetable = Min(freezetable, autovacuum_freeze_max_age * 0.95);
	Assert(freezetable >= 0);

	/*
	 * Compute the cutoff XID, being careful not to generate a permanent
	 * XID.
	 */
	limit = ReadNewTransactionId() - freezetable;
	if (!TransactionIdIsNormal(limit))
		limit = FirstNormalTransactionId;

	*freezeTableLimit = limit;

lazy_vacuum_rel:

	scan_all = TransactionIdPrecedesOrEquals(onerel->rd_rel->relfrozenxid,
	                                         freezeTableLimit);

If you're careful you can also notice that there is an interesting typo
in the freeze table computation: namely, it uses freeze_min_age instead
of freeze_table_age. Which probably explains why I had such bad
performance results with lowering vacuum_freeze_min_age; it basically
radically increases the number of full-table scans, far more than it
should.

I can't imagine that anybody with a large database ran pg successfully
with a small freeze_min_age due to this.

It seems to have been broken since the initial introduction of
freeze_table_age in 6587818542e79012276dcfedb2f97e3522ee5e9b. I guess it
wasn't noticed because the behaviour is only visible via autovacuum,
since a user-issued VACUUM passes -1 as freeze_min_age.

Trivial patch attached.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 043b6e6..8747705 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -449,7 +449,7 @@ vacuum_set_xid_limits(int freeze_min_age,
 		 * VACUUM schedule, the nightly VACUUM gets a chance to freeze tuples
 		 * before anti-wraparound autovacuum is launched.
 		 */
-		freezetable = freeze_min_age;
+		freezetable = freeze_table_age;
 		if (freezetable < 0)
 			freezetable = vacuum_freeze_table_age;
 		freezetable = Min(freezetable, autovacuum_freeze_max_age * 0.95);



Re: [HACKERS] pg_dump --pretty-print-views

2013-01-30 Thread Tom Lane
Marko Tiikkaja pgm...@joh.to writes:
 On 1/30/13 7:52 AM, Jeevan Chalke wrote:
 However, I am not sure about enabling WRAP_COLUMN_DEFAULT by default. In
 my opinion we should, but other people may object, so I will leave that
 decision on the committer's plate.

 I have no opinion on this one way or the other, so I'm fine with 
 waiting for someone else (possibly the committer) to decide.

FWIW, I'd vote for not enabling that by default --- it's basically an
unwarranted assumption about how wide people's terminal windows are.

regards, tom lane




Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-30 Thread Amit Kapila
On Tuesday, January 29, 2013 7:42 PM Amit Kapila wrote:
 On Tuesday, January 29, 2013 3:53 PM Heikki Linnakangas wrote:
  On 29.01.2013 11:58, Amit Kapila wrote:
   Can there be another way with which current patch code can be made
  better,
   so that we don't need to change the encoding approach, as I am
 having
   feeling that this might not be performance wise equally good.
 
  The point is that I don't want to heap_delta_encode() to know the
  internals of pglz compression. You could probably make my patch more
  like yours in behavior by also passing an array of offsets in the new
  tuple to check, and only checking for matches as those offsets.
 
 I think it makes sense, because if we have offsets of both new and old
 tuple, we
 can internally use memcmp to compare columns and use same algorithm for
 encoding.
 I will change the patch according to this suggestion.

I have modified the patch as per the above suggestion. Apart from
passing the new and old tuple offsets, I have also passed the bitmap
length, as we need to copy the new tuple's bitmap as-is into the encoded
WAL tuple.

Please see whether this API design is okay.

I shall update the README and send the performance/WAL-reduction data
for the modified patch tomorrow.

With Regards,
Amit Kapila.



wal_update_changes_v10.patch
Description: Binary data



Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-01-30 Thread Hari Babu
On Wednesday, January 30, 2013 7:59 PM Zoltán Böszörményi wrote:
On 2013-01-28 15:20, Hari Babu wrote:
 2. regress check failed because the expected .out file is not 
 updated properly.

Which regress check failed? The .out file was updated in the patch for
prepared_xacts.sql where the regression tests for lock_timeout were added.
Or do you mean the one for the sql file you sent?

During the regression test, the expected file prepared_xacts_1.out is used
for comparison with the result file, and it is not updated by the patch.
That is why the regression check is failing.

Regards,
Hari babu.





Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-01-30 Thread Tom Lane
MauMau maumau...@gmail.com writes:
 When I ran pg_ctl stop -mi against the primary, some applications 
 connected to the primary did not stop. ...
 The root cause is that gettext() is called in the signal handler quickdie() 
 via errhint().

Yeah, it's a known hazard that quickdie() operates like that.

 I think the solution is the typical one.  That is, to just remember the 
 receipt of SIGQUIT by setting a global variable and call siglongjmp() in 
 quickdie(), and perform tasks currently done in quickdie() when sigsetjmp() 
 returns in PostgresMain().

I think this cure is considerably worse than the disease.  As stated,
it's not a fix at all: longjmp'ing out of a signal handler is no better
defined than what happens now, in fact it's probably even less safe.
We could just set a flag and wait for the mainline code to notice,
but that would make SIGQUIT hardly any stronger than SIGTERM --- in
particular it couldn't get you out of any loop that wasn't checking for
interrupts.

The long and the short of it is that SIGQUIT is the emergency-stop panic
button.  You don't use it for routine shutdowns --- you use it when
there is a damn good reason to and you're prepared to do some manual
cleanup if necessary.

http://en.wikipedia.org/wiki/Big_Red_Switch

 What do think about the solution?  Could you include the fix?

Even if we had an arguably-better solution, I'd be disinclined to
risk cramming it into stable branches on such short notice.

What might make sense on short notice is to strengthen the
documentation's cautions against using SIGQUIT unnecessarily.

regards, tom lane




Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-01-30 Thread Andres Freund
On 2013-01-30 10:23:09 -0500, Tom Lane wrote:
 MauMau maumau...@gmail.com writes:
  When I ran pg_ctl stop -mi against the primary, some applications 
  connected to the primary did not stop. ...
  The root cause is that gettext() is called in the signal handler quickdie() 
  via errhint().
 
 Yeah, it's a known hazard that quickdie() operates like that.

What about not translating those? The messages are static and all memory
needed by postgres should be pre-allocated.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Jeff Janes
On Mon, Jan 28, 2013 at 9:09 PM, Josh Berkus j...@agliodbs.com wrote:

 Let's do this by example.  TableA is a large table which receives an
 almost constant stream of individual row updates, inserts, and deletes.

 DEFAULTS:

 XID 1:  First rows in TableA are updated.
 XID 200m: Anti-wraparound autovac of TableA.
  All XIDs older than XID 100m set to FROZENXID.
 XID 300m: Anti-wraparound autovac of TableA
  All XIDs older than XID 200M set to FROZENXID.
 XID 400m: Anti-wraparound autovac of TableA
  All XIDs older than XID 300M set to FROZENXID.
 XID 500m: Anti-wraparound autovac of TableA
  All XIDs older than XID 400M set to FROZENXID.
 XID 600m: Anti-wraparound autovac of TableA
  All XIDs older than XID 500M set to FROZENXID.

You seem to be assuming the only reason for vacuums to occur on this
table is autovacuum_freeze_max_age, which doesn't seem likely to be
the case for a busy table.

 vacuum_freeze_min_age = 1m

 XID 1:  First rows in TableA are updated.
 XID 200m: Anti-wraparound autovac of TableA.
  All XIDs older than XID 199m set to FROZENXID.
 XID 399m: Anti-wraparound autovac of TableA
  All XIDs older than XID 398M set to FROZENXID.
 XID 598m: Anti-wraparound autovac of TableA
  All XIDs older than XID 597M set to FROZENXID.


 vacuum_freeze_min_age = 1m, autovacuum_freeze_max_age = 500m

 XID 1:  First rows in TableA are updated.
 XID 500m: Anti-wraparound autovac of TableA.
  All XIDs older than XID 499m set to FROZENXID.

 As you can see, the current default settings cause 80% more wraparound
 autovacs per table than vacuum_freeze_min_age of 1m would, and almost
 500% more than what I consider sane settings would.

You can get nearly all the benefits of your sane settings just by
increasing autovacuum_freeze_max_age and leaving vacuum_freeze_min_age
alone.  (Assuming the table doesn't get vacuumed for other reasons)


Cheers,

Jeff




Re: [HACKERS] pg_dump --pretty-print-views

2013-01-30 Thread Andrew Dunstan


On 01/30/2013 09:58 AM, Tom Lane wrote:

Marko Tiikkaja pgm...@joh.to writes:

On 1/30/13 7:52 AM, Jeevan Chalke wrote:

However, I am not sure about enabling WRAP_COLUMN_DEFAULT by default. In
my opinion we should, but other people may object, so I will leave that
decision on the committer's plate.

I have no opinion on this one way or the other, so I'm fine with
waiting for someone else (possibly the committer) to decide.

FWIW, I'd vote for not enabling that by default --- it's basically an
unwarranted assumption about how wide people's terminal windows are.




I'm not exactly sure what you're arguing for.


cheers

andrew




Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-01-30 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-01-30 10:23:09 -0500, Tom Lane wrote:
 Yeah, it's a known hazard that quickdie() operates like that.

 What about not translating those? The messages are static and all memory
 needed by postgres should be pre-allocated.

That would reduce our exposure slightly, but hardly to zero.  For
instance, if SIGQUIT happened in the midst of handling a regular error,
ErrorContext might be pretty full already, necessitating further malloc
requests.  I thought myself about suggesting that quickdie do something
to disable gettext(), but it doesn't seem like that would make it enough
safer to justify the loss of user-friendliness for non English speakers.

I think the conflict between "we don't want SIGQUIT to interrupt this"
and "we do want SIGQUIT to interrupt that" is pretty fundamental, and
there's probably not any bulletproof solution (or at least none that
would have reasonable development/maintenance cost).  If we had more
confidence that there were no major loops lacking CHECK_FOR_INTERRUPTS
calls, maybe the set-a-flag approach would be acceptable ... but I
sure don't have such confidence.
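The set-a-flag approach amounts to the usual async-signal-safe pattern: the handler does nothing but write a flag, and long-running loops poll it. Here is a minimal sketch of the idea, with Python standing in for the backend's C; the function and flag names are made up, and the real backend analogue of the poll is CHECK_FOR_INTERRUPTS:

```python
import signal

shutdown_requested = False

def handle_sigterm(signum, frame):
    # The handler does the bare minimum: set a flag.  No allocation,
    # no I/O, no locks -- analogous to keeping elog() out of quickdie().
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGTERM, handle_sigterm)

def process_items(items):
    done = []
    for item in items:
        if shutdown_requested:          # the CHECK_FOR_INTERRUPTS analogue
            break
        done.append(item * 2)
        if item == 2:
            # Simulate the signal arriving in the middle of the loop.
            signal.raise_signal(signal.SIGTERM)
    return done

result = process_items([1, 2, 3, 4])
print(result)  # [2, 4] -- the loop stops once it notices the flag
```

The weakness Tom points out is visible even in the sketch: any loop that forgets the flag check simply never stops.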

regards, tom lane




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Jeff Janes
On Wed, Jan 30, 2013 at 5:39 AM, Kevin Grittner kgri...@ymail.com wrote:
 Andres Freund and...@2ndquadrant.com wrote:

 Don't think I did. I was talking about vacuum_freeze_table_age
 because that influences the amount of full-table scans

 Not any more than vacuum_freeze_min_age does.

There is a lot more room for vacuum_freeze_table_age to be increased
than there is for vacuum_freeze_min_age to be decreased.

 http://www.postgresql.org/docs/9.2/interactive/routine-vacuuming.html#AUTOVACUUM

 | a whole table sweep is forced if the table hasn't been fully
 | scanned for vacuum_freeze_table_age minus vacuum_freeze_min_age
 | transactions.

Why is vacuuming described as such a violent process?  It doesn't
"force" a table sweep, it just goes ahead and performs one.  In
general, it seems hard to tell from the docs that this only promotes a
vacuum which is going to happen anyway from a VM one to a full-scan
one.  The forcefulness makes it sound more like it is doing vacuums
that wouldn't happen otherwise (like autovacuum_freeze_max_age does,
which actually could be described as "force", since it turns on the
autovac launcher even if it is configured to be off).
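To keep the three knobs straight, here is a toy sketch (not the server's actual code) of how the thresholds interact, using the default values of the GUCs; the function and its parameter names are invented for illustration:

```python
def vacuum_scan_mode(relfrozenxid_age,
                     vacuum_freeze_table_age=150_000_000,
                     autovacuum_freeze_max_age=200_000_000,
                     vacuum_already_triggered=True):
    """Hedged sketch: which kind of scan a table gets at a given
    relfrozenxid age.  relfrozenxid_age is the XID age of the table's
    pg_class.relfrozenxid."""
    if relfrozenxid_age > autovacuum_freeze_max_age:
        # Launches a vacuum even if autovacuum is otherwise idle/off.
        return "anti-wraparound vacuum (forced, full heap scan)"
    if vacuum_already_triggered and relfrozenxid_age > vacuum_freeze_table_age:
        # A vacuum that was happening anyway is promoted from a
        # visibility-map scan to a full heap scan, so relfrozenxid
        # can be advanced.
        return "promoted to full heap scan (can advance relfrozenxid)"
    if vacuum_already_triggered:
        return "visibility-map scan (all-visible pages skipped)"
    return "no vacuum"

print(vacuum_scan_mode(100_000_000))
print(vacuum_scan_mode(160_000_000))
print(vacuum_scan_mode(250_000_000, vacuum_already_triggered=False))
```

The middle case is the "promotion" Jeff describes; only the last one is a vacuum that wouldn't have happened otherwise.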

 So reducing vacuum_freeze_min_age not only helps minimize the
 writes that are needed when autovacuum needs to scan the entire
 heap,

How does it do that? If the tuple doesn't need to be frozen now because
it was already frozen, that just means the write happened at a
different time, but it still happened.

Cheers,

Jeff




Re: [HACKERS] pg_dump --pretty-print-views

2013-01-30 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 01/30/2013 09:58 AM, Tom Lane wrote:
 Marko Tiikkaja pgm...@joh.to writes:
 On 1/30/13 7:52 AM, Jeevan Chalke wrote:
 However, I am not sure about putting WRAP_COLUMN_DEFAULT by default. In
 my opinion we should put that by default but other people may object so I
 will keep that in code committors plate.

 FWIW, I'd vote for not enabling that by default --- it's basically an
 unwarranted assumption about how wide people's terminal windows are.

 I'm not exactly sure what you're arguing for.

Maybe I'm confused - I thought the alternative Jeevan was talking about
was that we enable PRETTY_INDENT processing by default, but not
wrapColumn processing.

regards, tom lane




Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-01-30 Thread Zoltán Böszörményi

On 2013-01-30 16:06, Hari Babu wrote:

On Wednesday, January 30, 2013 7:59 PM Zoltán Böszörményi wrote:

On 2013-01-28 15:20, Hari Babu wrote:
2. regress check failed because the expected .out file is not
updated properly.

Which regress check failed? The .out file was updated in the patch for

prepared_xacts.sql where the regression tests for lock_timeout were added.
Or do you mean the one for the sql file you sent?

During the regression test, the prepared_xacts_1.out expected file is
used for comparison with the result file, but it is not updated by the
patch. Because of this, the regression check is failing.


I see, so this is a Windows-only change that needs a different expected file.
Can you send the resulting prepared_xacts_1.out file so I can integrate
its changes into my patch? That way it would be complete.
Thanks in advance.



Regards,
Hari babu.






--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/





[HACKERS] Event Triggers Patch Series

2013-01-30 Thread Dimitri Fontaine
Hi,

As the discussion about Event Triggers seems to be confusing at times
for people trying to follow at home, I've put together a wiki page that
tries to make things clear about the use cases we want to solve and
how, and about what has been committed already and the remaining work to
accomplish this CF or after:

  https://wiki.postgresql.org/wiki/Event_Triggers

If you've been following closely, please review that page to be sure you
agree with my summary here, and if you've not been following as closely
as you would have wanted to, we have a great challenge for you once
you've read it.

So, now that you've read the wiki page, the challenge is to devise a
reduced feature set that still forms a complete feature. That means that
the feature set must allow us to solve the use cases we are targeting.

My list is:

  - expose information to User Defined Function in Event Triggers
  - expose a Normalized Command String
  - expose Generated commands for Logical Replication consumers

And Robert's list begins with:

  - expose any object that we DROP, because of CASCADE or in a list of
multiple objects in a single command, or alone in its own command

So, this DROP concern should be solved with the patch I submitted
earlier, and should open the door to finish implementing the other
point.

Implementing here for me basically means creating new branches and
cherry picking existing and already reviewed (partly or fully) code to
make it a separate patch for review and apply.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] sql_drop Event Trigger

2013-01-30 Thread Dimitri Fontaine
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 So please find attached to this email an implementation of the sql_drop
 event trigger, that refrains on exposing any new information to the
 users.

Here is already a v1 of that patch; per comments from Álvaro, I reuse the
ObjectAddresses facility rather than building my own List of object
addresses.

Note that with that in place it's now easy to also add support for the
DROP OWNED BY command, but that's left for a future patch as I expect
some amount of discussion to go about it.

Also, I removed the code that was doing de-duplication of the object
addresses we collect, now trusting performMultipleDeletions() not to
screw us up. There's a use case that needs particular attention here,
though:

DROP TABLE foo, foo;

I'm not sure we want to deduplicate foo in the pg_dropped_objects()
output in that case, so I've not done so in this version of the patch.
Also, Álvaro is concerned that the cost of deduplicating might be higher
than what we want to take here.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



sql_drop.1.patch.gz
Description: Binary data



Re: [HACKERS] lazy_vacuum_heap()'s removal of HEAPTUPLE_DEAD tuples

2013-01-30 Thread Pavan Deolasee
On Wed, Jan 30, 2013 at 7:34 AM, Noah Misch n...@leadboat.com wrote:
 On Mon, Jan 28, 2013 at 07:24:04PM +0530, Pavan Deolasee wrote:
 On Wed, Jan 23, 2013 at 10:05 AM, Noah Misch n...@leadboat.com wrote:

  You're the second commentator to be skittish about the patch's 
  correctness, so
  I won't argue against a conservatism-motivated bounce of the patch.

 Can you please rebase the patch against the latest head ? I see
 Alvaro's and Simon's recent changes has bit-rotten the patch.

 Attached.

Thanks. Here are a few comments.

1. I saw you took care of the bug that I reported in the first email
by allowing overwriting a LP_DEAD itemid during recovery. While this
should be OK given that we had never seen reports of recovery trying
to accidentally overwrite a LP_DEAD itemid, are you sure after this
change we can't get into this situation post reachedConsistency ? If
we ever do, I think the recovery would just fail.

2. In lazy_scan_heap(), after detecting a DEAD tuple you now try to
confirm that the tuple must not require a freeze. Is that really safe?
I think this assumes that the HOT prune would have already truncated
*every* DEAD tuple to LP_DEAD except an INSERT_IN_PROGRESS which may
have turned into DEAD between two calls to HeapTupleSatisfiesVacuum().
While this might be true, but we never tried hard to prove that before
because it wasn't necessary. I remember Heikki raising this concern
when I proposed setting the VM bit after HOT prune. So a proof of that
would probably help.

3. Converting LP_DEAD to LP_UNUSED in lazy_vacuum_page() while holding
just a SHARE lock seems to be OK, but is a bit adventurous. I would
rather just get an EX lock and do it. Also, its probably more
appropriate to just mark the buffer dirty instead of
SetBufferCommitInfoNeedsSave(). It may cause line pointer bloat and
vacuum may not come to process this page again. This will also be kind
of unnecessary after the patch to set VM bit in the second phase gets
in.

4. Are changes in the variable names and logic around them in
lazy_scan_heap() really required ? Just makes me a bit uncomfortable.
See if we can minimize those changes or do it in a separate patch, if
possible.

I haven't run tests with the patch yet. Will see if I can try a few. I
share other's views on making these changes late in the cycle, but if
we can reduce the foot-print of the patch, we should be OK.

I see the following (and similar) messages while applying the patch,
but may be they are harmless.

(Stripping trailing CRs from patch.)

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 I can't imagine that anybody with a large database ran pg
 successfully with a small freeze_min_age due to this.

I can't speak to this from personal experience, because at
Wisconsin Courts we found ourselves best served by running a
database VACUUM FREEZE ANALYZE each night during off-peak hours.

 It seems to be broken since the initial introduction of
 freeze_table_age in 6587818542e79012276dcfedb2f97e3522ee5e9b.

 Trivial patch attached.

I didn't see a patch attached.

-Kevin




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Andres Freund
On 2013-01-30 10:21:07 -0800, Kevin Grittner wrote:
  It seems to be broken since the initial introduction of
  freeze_table_age in 6587818542e79012276dcfedb2f97e3522ee5e9b.
 
  Trivial patch attached.
 
 I didn't see a patch attached.

The archive has it, so I for once haven't forgotten sending it:
http://www.postgresql.org/message-id/20130130145521.gb3...@awork2.anarazel.de 

While it's a clear bug and fixing it in HEAD is a sure thing, an argument
could be made that it's a clear behavioural change in the back
branches. I don't think it holds too much water, but wanted to mention
it.

Andres

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Kevin Grittner
Kevin Grittner kgri...@ymail.com wrote:
 To: Andres Freund and...@2ndquadrant.com

 Trivial patch attached.

 I didn't see a patch attached.

Never mind; I was looking in the wrong spot.  (I just switched
email providers again because the last one couldn't seem to get the
email headers right for threading purposes.)

-Kevin




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Kevin Grittner
Jeff Janes jeff.ja...@gmail.com wrote:
 Kevin Grittner kgri...@ymail.com wrote:

 So reducing vacuum_freeze_min_age not only helps minimize the
 writes that are needed when autovacuum needs to scan the entire
 heap,

 How does it do that? If the tuple doesn't need to frozen now
 because it was already frozen, that just means the write happened
 at a different time but it still happened.

There's a lot to be said for nibbling away at it during VM
autovacuum runs versus doing it in big chunks in heap-scan runs,
particularly if your BBU cache can absorb up to a certain amount
with no real pain, but causes major pain if the write cache fills.

-Kevin




Re: [HACKERS] Hm, table constraints aren't so unique as all that

2013-01-30 Thread Jim Nasby

On 1/28/13 6:25 PM, Tom Lane wrote:

I think we need to tighten this down by having index-constraint creation
check for conflicts with other constraint types.  It also seems like it
might be a good idea to put in a unique index to enforce the intended
lack of conflicts --- note that the existing index on (conname,
connamespace) isn't unique.  It's a bit problematic that pg_constraint
contains both table-related constraints and domain-related constraints,
but it strikes me that we could get close enough by changing
pg_constraint_conname_nsp_index to be a unique index on
(conname, connamespace, conrelid, contypid).  That would fix the problem
as long as no pg_constraint entry ever has both conrelid and contypid
nonzero; the unique index couldn't catch such an error.  But it doesn't
seem to me that such a coding error would escape detection anyway.


My belt-and-suspenders mind tells me that there should be a check
constraint enforcing that conrelid IS NOT NULL XOR contypid IS
NOT NULL. We routinely do this at work.


Dunno if putting check constraints on catalog tables is possible/sane 
though...





Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Jim Nasby

On 1/25/13 11:56 AM, Christopher Browne wrote:

With a little bit of noodling around, here's a thought for a joint function
that I *think* has reasonably common scales:

f(deadtuples, relpages, age) =
deadtuples/relpages + e ^ (age*ln(relpages)/2^32)


Be careful with dead/relpages, because dead tuples increase relpages as 
well. The effect is extremely noticeable on frequently hit tables that 
need to be kept small. If you want to have a deadtuples/size metric, I 
think it would be far better to do deadtuples/non_bloated_table_size.


Someone else in the thread mentioned that what we really need to be 
watching aren't raw values, but trends. Or you can think of it as 
watching first (or even second) derivatives if you like. I couldn't 
agree more. I believe there are several parts of Postgres that end up 
with a bunch of hard to tune GUCs specifically because we're measuring 
the wrong things.


Take freezing for example. Since the only reason to freeze is XID wrap,
the *ideal* time to start a freeze vacuum on a table is so that the
vacuum would end *exactly* as we were about to hit XID wrap.


Obviously that's a completely impractical goal to hit, but notice the 
simplicity of the goal: we only care about the vacuum ending right 
before we'd hit XID wrap. The only way to do that is to monitor how fast 
vacuums are running, how fast XIDs are being consumed, and how quickly 
the oldest XID in each table is advancing. Notice that all of those 
measurements are time derivatives.
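A derivative-based scheduler along these lines could be sketched as follows; the names, rates and safety factor are all hypothetical, nothing like this exists in the server:

```python
def freeze_deadline_sec(xids_until_wrap, xid_burn_rate_per_sec):
    """Seconds until the table's oldest XID hits the wraparound limit
    at the current consumption rate (the first derivative above)."""
    return xids_until_wrap / xid_burn_rate_per_sec

def should_start_freeze(xids_until_wrap, xid_burn_rate_per_sec,
                        est_vacuum_duration_sec, safety_factor=10.0):
    """Start the freeze vacuum once the deadline falls within a padded
    multiple of the estimated vacuum run time -- the 'end right before
    wrap' goal, with margin for error in both estimates."""
    deadline = freeze_deadline_sec(xids_until_wrap, xid_burn_rate_per_sec)
    return deadline <= safety_factor * est_vacuum_duration_sec

# Burning 5,000 XID/s with 1.5 billion XIDs of headroom: no rush yet.
print(should_start_freeze(1_500_000_000, 5_000, 3_600))  # False
# Same rate, only 150,000 XIDs left, and the vacuum takes an hour: overdue.
print(should_start_freeze(150_000, 5_000, 3_600))        # True
```

The hard part, of course, is that both the burn rate and the vacuum duration are themselves moving targets that would have to be tracked over time.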


From a more practical standpoint, I think it would be extremely useful 
to have a metric that showed how quickly a table churned. Something like 
dead tuples per time period. Comparing that to the non-bloated table 
size should give a very strong indication of how critical frequent 
vacuums on that table are.


I don't have a good metric in mind for freeze right now, but I do want 
to mention a use case that I don't think has come up before. When 
building a londiste slave (and presumably all the other trigger 
replication systems suffer from this), each table is copied over in a 
single transaction, and then updates start flowing in for that table. 
That can easily result in a scenario where you have an enormous volume 
of tuples that will all need freezing at almost exactly the same time. 
It would be nice if we could detect such a condition and freeze those 
tuples over time, instead of trying to freeze all of them in one shot.





Re: [HACKERS] Hm, table constraints aren't so unique as all that

2013-01-30 Thread Jim Nasby

On 1/29/13 6:40 PM, Tom Lane wrote:

I wrote:

Over in the thread about enhanced error fields, I claimed that
constraints are uniquely named among those associated with a table,
or with a domain.  But it turns out that that ain't necessarily so,
because the code path for index constraints doesn't pay any attention
to pre-existing check constraints: ...
I think we need to tighten this down by having index-constraint creation
check for conflicts with other constraint types.  It also seems like it
might be a good idea to put in a unique index to enforce the intended
lack of conflicts --- note that the existing index on (conname,
connamespace) isn't unique.  It's a bit problematic that pg_constraint
contains both table-related constraints and domain-related constraints,
but it strikes me that we could get close enough by changing
pg_constraint_conname_nsp_index to be a unique index on
(conname, connamespace, conrelid, contypid).

I experimented with changing pg_constraint's index that way.  It doesn't
seem to break anything, but it turns out not to fix the problem
completely either, because if you use CREATE INDEX syntax to create an
index then no pg_constraint entry is made at all.  So it's still
possible to have an index with the same name as some non-index
constraint on the same table.

If we wanted to pursue this, we could think about decreeing that every
index must have a pg_constraint entry.  That would have some attraction
from the standpoint of catalog-entry uniformity, but there are
considerable practical problems in the way as well.  Notably, what would
we do for the conkey field in pg_constraint for an expression index?
(Failing to set that up as expected might well break client-side code.)
Also, I think we'd end up with the pg_depend entry between the index and
the constraint pointing in opposite directions depending on whether the
index was made using CONSTRAINT syntax or CREATE INDEX syntax.  There's
some precedent for that with the linkage between pg_class entries and
their pg_type rowtype entries, but that's a mess that I'd rather not
replicate.

Or we could leave the catalogs alone and just add more pre-creation
checking for conflicts.  That doesn't seem very bulletproof though
because of possible race conditions.  I think that right now it'd
be safe enough because of the table-level locks taken by ALTER TABLE
and CREATE INDEX --- but if the project to reduce ALTER TABLE's locking
level ever gets resurrected, we'd be at serious risk of introducing
a problem there.

Or on the third hand, we could just say it's okay if there are conflicts
between index names and check-constraint names.  Any given SQLSTATE
would only be mentioning one of these types of constraints, so it's
arguable that there's not going to be any real ambiguity in practice.

At the moment I'm inclined to leave well enough alone.  Thoughts?


ISTM that we shouldn't blindly equate indexes and constraints. I'd
actually argue they should in no way be related, except that I've found
it to be extremely useful to create unique indexes that cover scenarios
that you can't handle with an actual unique constraint (i.e.,
UNIQUE(field_a, field_b) WHERE field_c IS NULL).


Perhaps a good compromise would be that only unique indexes get entries 
in pg_constraint.


Something else worth mentioning is that hopefully we'll eventually have 
indexes (including unique) that can span multiple tables.





[HACKERS] plpgsql versus SPI plan abstraction

2013-01-30 Thread Tom Lane
I looked into the odd behavior noted recently on pgsql-novice that
the error context stack reported by plpgsql could differ between
first and subsequent occurrences of the same error:
http://www.postgresql.org/message-id/26370.1358539...@sss.pgh.pa.us

This seems to be specific to errors that are detected at plan time for a
potentially-simple expression.  The example uses 1/0 which throws an
error when eval_const_expressions tries to simplify it.  From plpgsql's
viewpoint, the error happens when it tries to use GetCachedPlan() to get
a plan tree that it can check for simple-ness.  In this situation, we
have not pushed _SPI_error_callback onto the error context stack, so the
line it might contribute to the context report doesn't show up.
However, exec_simple_check_plan is set up to mark the PLpgSQL_expr as
non-simple at the outset, so that when it loses control due to the
error, that's how the already-cached PLpgSQL_expr is marked.  Thus, on a
subsequent execution, we don't go through there but just pass off
control to SPI_execute_plan --- and it *does* set up _SPI_error_callback
before calling GetCachedPlan().  So now you get the additional line of
context.

There doesn't seem to be a comparable failure mode before 9.2, because
in previous releases planning would always occur before we created a
CachedPlanSource at all; so the failure would leave plpgsql still
without a cached PLpgSQL_expr, and the behavior would be consistent
across tries.

My first thought about fixing this was to export _SPI_error_callback
so that plpgsql could push it onto the context stack before doing
GetCachedPlan.  But that's just another piercing of the veil of
modularity.  What seems like a better solution is to export a SPI
wrapper of GetCachedPlan() that pushes the callback locally.  With a
bit more work (a wrapper to get the CachedPlanSource list) we could
also stop letting pl_exec.c #include spi_priv.h, which is surely a
modularity disaster from the outset.

Does anyone see a problem with back-patching such a fix into 9.2,
so as to get rid of the context stack instability there?

BTW, I'm also wondering if it's really necessary for plpython/plpy_spi.c
to be looking into spi_priv.h ...

regards, tom lane




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Kevin Grittner
Jim Nasby j...@nasby.net wrote:

 the only reason to freeze is XID wrap

The conclusions you draw seem to be based on a slightly different
premise than stated here; the conclusions are only supported by the
assumption that the only reason to freeze at any particular moment
is to assure that all tuples are frozen in time to prevent
transaction wraparound.  In a mostly write-only database, that has
to happen for nearly every tuple at some point -- but *which* point
is best is not necessarily as late as possible.

 then the *ideal* time to start a freeze vacuum on a table is so
 that the vacuum would end *exactly* as we were about to hit XID
 wrap.

For a tuple which you know is going to survive long enough to be
frozen, the *ideal* time to freeze a tuple, from a performance
standpoint, is when the tuple is written, to eliminate a separate
WAL-log operation.  The *next best* time to freeze is when the hint
bits are set, to avoid a separate page write.  If you are doing
differential backups, the *third best* time to freeze is before the
first differential backup of the tuple, to avoid a separate backup
after the freeze.  And so on.

-Kevin




Re: [HACKERS] plpgsql versus SPI plan abstraction

2013-01-30 Thread Pavel Stehule
2013/1/30 Tom Lane t...@sss.pgh.pa.us:
 I looked into the odd behavior noted recently on pgsql-novice that
 the error context stack reported by plpgsql could differ between
 first and subsequent occurrences of the same error:
 http://www.postgresql.org/message-id/26370.1358539...@sss.pgh.pa.us

 This seems to be specific to errors that are detected at plan time for a
 potentially-simple expression.  The example uses 1/0 which throws an
 error when eval_const_expressions tries to simplify it.  From plpgsql's
 viewpoint, the error happens when it tries to use GetCachedPlan() to get
 a plan tree that it can check for simple-ness.  In this situation, we
 have not pushed _SPI_error_callback onto the error context stack, so the
 line it might contribute to the context report doesn't show up.
 However, exec_simple_check_plan is set up to mark the PLpgSQL_expr as
 non-simple at the outset, so that when it loses control due to the
 error, that's how the already-cached PLpgSQL_expr is marked.  Thus, on a
 subsequent execution, we don't go through there but just pass off
 control to SPI_execute_plan --- and it *does* set up _SPI_error_callback
 before calling GetCachedPlan().  So now you get the additional line of
 context.

 There doesn't seem to be a comparable failure mode before 9.2, because
 in previous releases planning would always occur before we created a
 CachedPlanSource at all; so the failure would leave plpgsql still
 without a cached PLpgSQL_expr, and the behavior would be consistent
 across tries.

 My first thought about fixing this was to export _SPI_error_callback
 so that plpgsql could push it onto the context stack before doing
 GetCachedPlan.  But that's just another piercing of the veil of
 modularity.  What seems like a better solution is to export a SPI
 wrapper of GetCachedPlan() that pushes the callback locally.  With a
 bit more work (a wrapper to get the CachedPlanSource list) we could
 also stop letting pl_exec.c #include spi_priv.h, which is surely a
 modularity disaster from the outset.

 Does anyone see a problem with back-patching such a fix into 9.2,
 so as to get rid of the context stack instability there?


this is a clean bug, so please back-patch it to 9.2.

Regards

Pavel

 BTW, I'm also wondering if it's really necessary for plpython/plpy_spi.c
 to be looking into spi_priv.h ...

 regards, tom lane






Re: [HACKERS] pg_ctl idempotent option

2013-01-30 Thread Bruce Momjian
On Wed, Jan 30, 2013 at 04:07:45PM +1100, Josh Berkus wrote:
 
  I don't think I like --force because it isn't clear if we are forcing
  the start to have done something, or forcing the server to be running.
  
  Do we need this idempotent feature for stop too?
 
 Yes, of course.

If idempotent only affects -w (we don't wait for the return code without
-w), can we make -W idempotent?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] plpgsql versus SPI plan abstraction

2013-01-30 Thread Jan Urbański

On 30/01/13 22:23, Tom Lane wrote:

BTW, I'm also wondering if it's really necessary for plpython/plpy_spi.c
to be looking into spi_priv.h ...


As far as I can tell, it's not necessary, spi.h would be perfectly fine.

Cheers,
Jan




Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Christopher Browne
On Wed, Jan 30, 2013 at 4:05 PM, Jim Nasby j...@nasby.net wrote:
 On 1/25/13 11:56 AM, Christopher Browne wrote:

 With a little bit of noodling around, here's a thought for a joint
 function
 that I *think* has reasonably common scales:


 f(deadtuples, relpages, age) =
 deadtuples/relpages + e ^ (age*ln(relpages)/2^32)


 Be careful with dead/relpages, because dead tuples increase relpages as
 well. The effect is extremely noticeable on frequently hit tables that need
 to be kept small. If you want to have a deadtuples/size metric, I think it
 would be far better to do deadtuples/non_bloated_table_size.

You may well be right about that; I didn't try to get to second order effects,
just to come up with something that seemed reasonably readily computable
based on existing data that seemed relevant.

One approach to things is to ask if we can tweak that model to be materially
better without requiring extra data collection.  It seems plausible to do so; we
might get something better by deducting dead tuples.  I'll not try to present
this yet; I think it's a better idea to come up with a broadly fuller model.
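For anyone who wants to play with the proposed function, here is a direct transcription (assuming relpages >= 1 so the logarithm is defined; the function name is made up):

```python
import math

def autovac_priority(dead_tuples, relpages, relfrozenxid_age):
    """Browne's proposed joint score:
        deadtuples/relpages + e ** (age * ln(relpages) / 2**32)
    The first term tracks bloat.  The exponential term is equivalent to
    relpages ** (age / 2**32): it is 1 for a freshly frozen table and
    climbs toward relpages as the age approaches the full XID space, so
    large old tables float to the top well before wraparound."""
    bloat_term = dead_tuples / relpages
    freeze_term = math.exp(relfrozenxid_age * math.log(relpages) / 2**32)
    return bloat_term + freeze_term

print(autovac_priority(50, 100, 0))      # 0.5 + 1.0 = 1.5
print(autovac_priority(0, 100, 2**31))   # e**(ln(100)/2) = 10.0
```

Jim's objection shows up directly in the first term: bloat inflates relpages too, so deadtuples/relpages understates churn on heavily hit tables.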

 Someone else in the thread mentioned that what we really need to be watching
 aren't raw values, but trends. Or you can think of it as watching first (or
 even second) derivatives if you like. I couldn't agree more. I believe there
 are several parts of Postgres that end up with a bunch of hard to tune GUCs
 specifically because we're measuring the wrong things.

My first order reaction is that this sounds great.  And I reckon it would be
a great idea to come up with a reasonably sophisticated model that covers
all of the behaviours that we see emerging.

Mind you, I'm not sure that we can actually *use* that model; it may require
collecting enough more data that it becomes burdensome to collect the
additional data.

 Take freezing for example. Since the only reason to freeze is XID wrap then
 the *ideal* time to start a freeze vacuum on a table is so that the vacuum
 would end *exactly* as we were about to hit XID wrap.

 Obviously that's a completely impractical goal to hit, but notice the
 simplicity of the goal: we only care about the vacuum ending right before
 we'd hit XID wrap. The only way to do that is to monitor how fast vacuums
 are running, how fast XIDs are being consumed, and how quickly the oldest
 XID in each table is advancing. Notice that all of those measurements are
 time derivatives.

Yep, and I think some second derivatives will prove useful too.

The function that I presented took a slightly different tack to things; the
exponential term would try to make sure that tables get frozen WELL in
advance of necessity.  That strikes me as being safer in cases where
a database is blowing through XIDs mighty quickly.

 From a more practical standpoint, I think it would be extremely useful to
 have a metric that showed how quickly a table churned. Something like dead
 tuples per time period. Comparing that to the non-bloated table size should
 give a very strong indication of how critical frequent vacuums on that table
 are.

That's a nice idea, though I'm pretty sure that this requires capturing and
storing a lot more statistical data, which presumably worsens things
(presuming the data gets stored in tables), making the database bigger,
adding an extra set of data that needs to get collected, queried, and
trimmed, and chewing through still more XIDs.

If you could demonstrate pretty forcibly improved behaviours falling
out of that, that may make it salable.  There's quite a bit of work
there, and we would need to accept a burden of still more stats
collection.

 I don't have a good metric in mind for freeze right now, but I do want to
 mention a use case that I don't think has come up before. When building a
 londiste slave (and presumably all the other trigger replication systems
 suffer from this), each table is copied over in a single transaction, and
 then updates start flowing in for that table. That can easily result in a
 scenario where you have an enormous volume of tuples that will all need
 freezing at almost exactly the same time. It would be nice if we could
 detect such a condition and freeze those tuples over time, instead of trying
 to freeze all of them in one shot.

I wonder if these are perhaps good candidates for being frozen immediately;
COPY FREEZE was added not so long ago; it doesn't perfectly cover
this, but if I squint at it a bit...

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8de72b66a2edcf12c812de0a73bd50b6b7d81d62

I don't see a lot of value to having the data COPYed over to a subscriber
NOT getting frozen immediately.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-01-30 Thread MauMau

From: Tom Lane t...@sss.pgh.pa.us

MauMau maumau...@gmail.com writes:

I think the solution is the typical one.  That is, to just remember the
receipt of SIGQUIT by setting a global variable and call siglongjmp() in
quickdie(), and perform tasks currently done in quickdie() when
sigsetjmp() returns in PostgresMain().


I think this cure is considerably worse than the disease.  As stated,
it's not a fix at all: longjmp'ing out of a signal handler is no better
defined than what happens now, in fact it's probably even less safe.
We could just set a flag and wait for the mainline code to notice,
but that would make SIGQUIT hardly any stronger than SIGTERM --- in
particular it couldn't get you out of any loop that wasn't checking for
interrupts.


Oh, I was careless.  You are right, my suggestion is not a fix at all 
because free() would continue to hold some lock after siglongjmp(), which 
malloc() tries to acquire.



The long and the short of it is that SIGQUIT is the emergency-stop panic
button.  You don't use it for routine shutdowns --- you use it when
there is a damn good reason to and you're prepared to do some manual
cleanup if necessary.

http://en.wikipedia.org/wiki/Big_Red_Switch


How about the case where some backend crashes due to a bug in PostgreSQL? 
In this case, postmaster sends SIGQUIT to all backends, too.  The instance 
is expected to disappear cleanly and quickly.  Doesn't the hanging backend 
harm the restart of the instance?


How about using SIGKILL instead of SIGQUIT?  The purpose of SIGQUIT is to 
shut down the processes quickly, and SIGKILL is the best signal for that 
purpose.  The WARNING message would not be sent to clients, but that does 
not justify the inability to shut down immediately.


Regards
MauMau






Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Jim Nasby

On 1/30/13 3:28 PM, Kevin Grittner wrote:

Jim Nasby j...@nasby.net wrote:



then the *ideal* time to start a freeze vacuum on a table is so
that the vacuum would end *exactly* as we were about to hit XID
wrap.


For a tuple which you know is going to survive long enough to be
frozen, the *ideal* time to freeze a tuple, from a performance
standpoint, is when the tuple is written, to eliminate a separate
WAL-log operation.  The *next best* time to freeze is when the hint
bits are set, to avoid a separate page write.  If you are doing
differential backups, the *third best* time to freeze is before the
first differential backup of the tuple, to avoid a separate backup
after the freeze.  And so on.


Yeah, I hadn't considered that aspect, but I think that's a separate 
discussion. I agree that there are many cases where we would benefit 
from freezing early, and those should all help reduce the impact of 
XID-wrap induced freezing.


Even if we do introduce up-front freezing, we still need to deal with 
all the other cases though, which in the worst case means avoiding the 
XID wrap. I believe that particular case (as well as the whole autovac 
priority question) would be best served by looking at 1st and 2nd order 
derivatives.






Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Jim Nasby

On 1/30/13 4:37 PM, Christopher Browne wrote:

 From a more practical standpoint, I think it would be extremely useful to
have a metric that showed how quickly a table churned. Something like dead
tuples per time period. Comparing that to the non-bloated table size should
give a very strong indication of how critical frequent vacuums on that table
are.

That's a nice idea, though I'm pretty sure that this requires capturing and
storing a lot more statistical data, which presumably worsens things
(presuming the data gets stored in tables), making the database bigger,
adding an extra set of data that needs to get collected, queried, and
trimmed, and chewing through still more XIDs.


We don't necessarily need added storage (or at least nothing serious). 
It could potentially be as simple as remembering when we last took a 
measurement and using that to also store number of blahs per second. 
That's only 2 added data points (time and the rate). Of course, there 
are pitfalls to something that simple, though some of those pitfalls have 
simple solutions (such as using estimated weighted means).


In this case I think we would be better served by trying to define a 
good metric first, and then worrying about how to actually capture and 
store it.



I don't have a good metric in mind for freeze right now, but I do want to
mention a use case that I don't think has come up before. When building a
londiste slave (and presumably all the other trigger replication systems
suffer from this), each table is copied over in a single transaction, and
then updates start flowing in for that table. That can easily result in a
scenario where you have an enormous volume of tuples that will all need
freezing at almost exactly the same time. It would be nice if we could
detect such a condition and freeze those tuples over time, instead of trying
to freeze all of them in one shot.

I wonder if these are perhaps good candidates for being frozen immediately;
COPY FREEZE was added not so long ago; it doesn't perfectly cover
this, but if I squint at it a bit...

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8de72b66a2edcf12c812de0a73bd50b6b7d81d62

I don't see a lot of value to having the data COPYed over to a subscriber
NOT getting frozen immediately.


Yeah, COPY FREEZE would probably be highly valuable in this case.




Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-01-30 Thread Tom Lane
MauMau maumau...@gmail.com writes:
 From: Tom Lane t...@sss.pgh.pa.us
 The long and the short of it is that SIGQUIT is the emergency-stop panic
 button.  You don't use it for routine shutdowns --- you use it when
 there is a damn good reason to and you're prepared to do some manual
 cleanup if necessary.

 How about the case where some backend crashes due to a bug of PostgreSQL? 
 In this case, postmaster sends SIGQUIT to all backends, too.  The instance 
 is expected to disappear cleanly and quickly.  Doesn't the hanging backend 
 harm the restart of the instance?

[ shrug... ]  That isn't guaranteed, and never has been --- for
instance, the process might have SIGQUIT blocked, perhaps as a result
of third-party code we have no control over.

 How about using SIGKILL instead of SIGQUIT?

Because then we couldn't notify clients at all.  One practical
disadvantage of that is that it would become quite hard to tell from
the outside which client session actually crashed, which is frequently
useful to know.

This isn't an area that admits of quick-fix solutions --- everything
we might do has disadvantages.  Also, the lack of complaints to date
shows that the problem is not so large as to justify panic responses.
I'm not really inclined to mess around with a tradeoff that's been
working pretty well for a dozen years or more.

regards, tom lane




Re: [HACKERS] backend hangs at immediate shutdown

2013-01-30 Thread Tatsuo Ishii
 This isn't an area that admits of quick-fix solutions --- everything
 we might do has disadvantages.  Also, the lack of complaints to date
 shows that the problem is not so large as to justify panic responses.
 I'm not really inclined to mess around with a tradeoff that's been
 working pretty well for a dozen years or more.

What about adding a caution to the doc something like:

 "pg_ctl -m -i stop" may cause a PostgreSQL hang if native language 
support is enabled.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [HACKERS] backend hangs at immediate shutdown

2013-01-30 Thread Andres Freund
On 2013-01-31 08:27:13 +0900, Tatsuo Ishii wrote:
  This isn't an area that admits of quick-fix solutions --- everything
  we might do has disadvantages.  Also, the lack of complaints to date
  shows that the problem is not so large as to justify panic responses.
  I'm not really inclined to mess around with a tradeoff that's been
  working pretty well for a dozen years or more.
 
 What about adding a caution to the doc something like:
 
    "pg_ctl -m -i stop" may cause a PostgreSQL hang if native language 
 support is enabled.

That doesn't entirely solve the problem, see quote and reply in
6845.1359561...@sss.pgh.pa.us

I think adding errmsg_raw() or somesuch that doesn't allocate any memory
and only accepts constant strings could solve the problem more
completely, at the obvious price of not allowing translated strings
directly.
Those could be pretranslated during startup, but that's mighty ugly.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] backend hangs at immediate shutdown

2013-01-30 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-01-31 08:27:13 +0900, Tatsuo Ishii wrote:
 What about adding a caution to the doc something like:
 "pg_ctl -m -i stop" may cause a PostgreSQL hang if native language support 
 is enabled.

 That doesn't entirely solve the problem, see quote and reply in
 6845.1359561...@sss.pgh.pa.us

 I think adding errmsg_raw() or somesuch that doesn't allocate any memory
 and only accepts constant strings could solve the problem more
 completely, at the obvious price of not allowing translated strings
 directly.

I really doubt that this would make a measurable difference in the
probability of failure.  The OP's case looks like it might not have
occurred if we weren't translating, but (a) that's not actually proven,
and (b) there are any number of other, equally low-probability, reasons
to have a problem here.  Please note for instance that elog.c would
still be doing a whole lot of palloc's even if the passed strings were
not copied.

I think if we want to make it bulletproof we'd have to do what the
OP suggested and switch to SIGKILL.  I'm not enamored of that for the
reasons I mentioned --- but one idea that might dodge the disadvantages
is to have the postmaster wait a few seconds and then SIGKILL any
backends that hadn't exited.

regards, tom lane




Re: [HACKERS] backend hangs at immediate shutdown

2013-01-30 Thread Tatsuo Ishii
 What about adding a caution to the doc something like:
 
   "pg_ctl -m -i stop" may cause a PostgreSQL hang if native language 
 support is enabled.
 
 That doesn't entirely solve the problem, see quote and reply in
 6845.1359561...@sss.pgh.pa.us

Oh, I see now.

 I think adding errmsg_raw() or somesuch that doesn't allocate any memory
 and only accepts constant strings could solve the problem more
 completely, at the obvious price of not allowing translated strings
 directly.
 Those could be pretranslated during startup, but that's mighty ugly.

Are you suggesting to call errmsg_raw() instead of errmsg() in quickdie()?

Tom said:
 That would reduce our exposure slightly, but hardly to zero.  For
 instance, if SIGQUIT happened in the midst of handling a regular error,
 ErrorContext might be pretty full already, necessitating further malloc
 requests.

If I understand this correctly, I don't think errmsg_raw()
solves the particular problem.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [HACKERS] pg_dump --pretty-print-views

2013-01-30 Thread Andrew Dunstan


On 01/30/2013 11:03 AM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

On 01/30/2013 09:58 AM, Tom Lane wrote:

Marko Tiikkaja pgm...@joh.to writes:

On 1/30/13 7:52 AM, Jeevan Chalke wrote:

However, I am not sure about putting WRAP_COLUMN_DEFAULT by default. In
my opinion we should put that by default but other people may object, so I
will leave that on the committer's plate.

FWIW, I'd vote for not enabling that by default --- it's basically an
unwarranted assumption about how wide people's terminal windows are.

I'm not exactly sure what you're arguing for.

Maybe I'm confused - I thought the alternative Jeevan was talking about
was that we enable PRETTY_INDENT processing by default, but not
wrapColumn processing.





Well, we could actually set the wrap value to 0, which would mean "always 
wrap". That wouldn't be making any assumption about the user's terminal 
window size ;-)


Here are two examples: one is with exactly Marko's patch, which doesn't 
change the wrap value (i.e. leaves it at -1, meaning no wrap); the other 
sets it to 0. The example is a view that is the same as 
information_schema.columns.


Personally I find the wrapped case MUCH more readable. I guess anything 
is an advance, but turning on PRETTY_INDENT without turning on some 
level of target wrapping seems like a half-done job.


cheers

andrew
--
-- PostgreSQL database dump
--

SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;

--
-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner: 
--

CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;


--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner: 
--

COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';


SET search_path = public, pg_catalog;

--
-- Name: mycols; Type: VIEW; Schema: public; Owner: andrew
--

CREATE VIEW mycols AS
 SELECT (current_database())::information_schema.sql_identifier AS table_catalog, (nc.nspname)::information_schema.sql_identifier AS table_schema, (c.relname)::information_schema.sql_identifier AS table_name, (a.attname)::information_schema.sql_identifier AS column_name, (a.attnum)::information_schema.cardinal_number AS ordinal_position, (pg_get_expr(ad.adbin, ad.adrelid))::information_schema.character_data AS column_default, (
CASE
WHEN (a.attnotnull OR ((t.typtype = 'd'::"char") AND t.typnotnull)) THEN 'NO'::text
ELSE 'YES'::text
END)::information_schema.yes_or_no AS is_nullable, (
CASE
WHEN (t.typtype = 'd'::"char") THEN 
CASE
WHEN ((bt.typelem <> (0)::oid) AND (bt.typlen = (-1))) THEN 'ARRAY'::text
WHEN (nbt.nspname = 'pg_catalog'::name) THEN format_type(t.typbasetype, NULL::integer)
ELSE 'USER-DEFINED'::text
END
ELSE 
CASE
WHEN ((t.typelem <> (0)::oid) AND (t.typlen = (-1))) THEN 'ARRAY'::text
WHEN (nt.nspname = 'pg_catalog'::name) THEN format_type(a.atttypid, NULL::integer)
ELSE 'USER-DEFINED'::text
END
END)::information_schema.character_data AS data_type, (information_schema._pg_char_max_length(information_schema._pg_truetypid(a.*, t.*), information_schema._pg_truetypmod(a.*, t.*)))::information_schema.cardinal_number AS character_maximum_length, (information_schema._pg_char_octet_length(information_schema._pg_truetypid(a.*, t.*), information_schema._pg_truetypmod(a.*, t.*)))::information_schema.cardinal_number AS character_octet_length, (information_schema._pg_numeric_precision(information_schema._pg_truetypid(a.*, t.*), information_schema._pg_truetypmod(a.*, t.*)))::information_schema.cardinal_number AS numeric_precision, (information_schema._pg_numeric_precision_radix(information_schema._pg_truetypid(a.*, t.*), information_schema._pg_truetypmod(a.*, t.*)))::information_schema.cardinal_number AS numeric_precision_radix, (information_schema._pg_numeric_scale(information_schema._pg_truetypid(a.*, t.*), information_schema._pg_truetypmod(a.*, t.*)))::information_schema.cardinal_number AS numeric_scale, (information_schema._pg_datetime_precision(information_schema._pg_truetypid(a.*, t.*), information_schema._pg_truetypmod(a.*, t.*)))::information_schema.cardinal_number AS datetime_precision, (information_schema._pg_interval_type(information_schema._pg_truetypid(a.*, t.*), information_schema._pg_truetypmod(a.*, t.*)))::information_schema.character_data AS interval_type, (NULL::integer)::information_schema.cardinal_number AS interval_precision, (NULL::character varying)::information_schema.sql_identifier AS character_set_catalog, (NULL::character varying)::information_schema.sql_identifier AS character_set_schema, (NULL::character varying)::information_schema.sql_identifier AS character_set_name, (
CASE
WHEN (nco.nspname IS NOT 

Re: [HACKERS] pg_dump --pretty-print-views

2013-01-30 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 01/30/2013 11:03 AM, Tom Lane wrote:
 FWIW, I'd vote for not enabling that by default --- it's basically an
 unwarranted assumption about how wide people's terminal windows are.

 Well, we could actually set the wrap value to 0, which would mean always 
 wrap. That wouldn't be making any assumption about the user's terminal 
 window size ;-)

I could live with that.  I rather wonder actually if WRAP_COLUMN_DEFAULT
shouldn't be removed in favor of using 0 as the default.  80-column
windows may be our coding standard for Postgres, but that doesn't mean
they're not pretty passe elsewhere.

regards, tom lane




Re: [HACKERS] Review : Add hooks for pre- and post-processor executables for COPY and \copy

2013-01-30 Thread Amit Kapila
On Wednesday, January 23, 2013 5:36 PM Etsuro Fujita wrote:
 Hi Amit,

 Thank you for your review.  I’ve rebased and updated the patch.  Please
find attached the patch.

 Test case issues: 
 -- 
 1. Broken pipe is not handled in case of the psql \copy command; 
     issues are as follows: 
         The following were verified on SuSE-Linux 10.2. 
         1) psql exits when a \COPY xxx TO command is issued and the
command/script is not found 
                 When popen is called in write mode it creates a valid
file descriptor, and when it tries to write to the file a "Broken pipe" 
error occurs which is not handled. 
                         psql# \copy pgbench_accounts TO PROGRAM
'../compress.sh pgbench_accounts4.txt' 
         2) When a \copy command is in progress and the program/command is
killed or crashes due to any problem, 
            psql exits.

This is a headache.  I have no idea how to solve this.

I think we can keep it for committer to take a call on this issue.
The other changes done by you in revised patch are fine. 

I have found few more minor issues as below:

1. The comment above do_copy can be modified to address the new
functionality it can handle. 
/* 
 * Execute a \copy command (frontend copy). We have to open a file, then 
 * submit a COPY query to the backend and either feed it data from the 
 * file or route its response into the file. 
 */ 
bool 
do_copy(const char *args) 


2. 
@@ -256,8 +273,14 @@ do_copy(const char *args) 
+if (options->file == NULL && options->program) 
+{ 
+psql_error("program is not supported to stdout/pstdout or
from stdin/pstdin\n"); 
+return false; 
+} 

should call free_copy_options(options); before return false; 

3. \copy command doesn't need semicolon at end, however it was working
previous to your patch, but 
   now it is giving error. 
postgres=# \copy t1 from 'e:\pg_git_code\Data\t1_Data.txt'; 
e:/pg_git_code/Data/t1_Data.txt';: No such file or directory 
e:/pg_git_code/Data/t1_Data.txt';: No such file or directory 

4. Please check whether OpenPipeStream() needs to call 
   ReleaseLruFile(). 

5. The following in copy.sgml could be changed to be more meaningful, as the 
first line looks a little ad hoc. 
+ <para> 
+  The command that input comes from or that output goes to. 
+  The command for COPY FROM, which input comes from, must write its
output 
+  to standard output.  The command for COPY TO, which output goes to,
must 
+  read its input from standard input. 
+ </para> 


6. Can we have one example of this new syntax? It would make it more
meaningful.
 
With Regards,
Amit Kapila.





Re: [HACKERS] autovacuum not prioritising for-wraparound tables

2013-01-30 Thread Josh Berkus

 You can get nearly all the benefits of your sane settings just by
 increasing autovacuum_freeze_max_age and leaving vacuum_freeze_min_age
 alone.  (Assuming the table doesn't get vacuumed for other reasons)

Correct, it's the ratio that matters.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] review: pgbench - aggregation of info written into log

2013-01-30 Thread Tatsuo Ishii
 On Thu, Jan 17, 2013 at 7:43 PM, Tatsuo Ishii is...@postgresql.org wrote:
 So if my understanding is correct, 1) Tomas Vondra commits to work on
 Windows support for 9.4, 2) on the assumption that one of Andrew
 Dunstan, Dave Page or Magnus Hagander will help him in Windows
 development.

 Ok? If so, I can commit the patch for 9.3 without Windows support. If
 not, I will move the patch to next CF (for 9.4).

 Please correct me if I am wrong.
 
 +1 for this approach.  I agree with Dave and Magnus that we don't want
 Windows to become a second-class platform, but this patch isn't making
 it so.  The #ifdef that peeks inside of an instr_time is already
 there, and it's not Tomas's fault that nobody has gotten around to
 fixing it before now.
 
 Right.
 
 OTOH, I think that this sort of thing is quite wrong:
 
 +#ifndef WIN32
 +--aggregate-interval NUM\n
 + aggregate data over NUM seconds\n
 +#endif
 
 The right approach if this can't be supported on Windows is to still
 display the option in the --help output, and to display an error
 message if the user tries to use it, saying that it is not currently
 supported on Windows.  That fact should also be mentioned in the
 documentation.
 
 Agreed. This seems to be a much better approach.
 
 Here is the new patch.

Committed (with minor fix).
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

