[HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Pavel Stehule
Hello,

there is a request to enhance pg_backup to produce conditional
DROPs. The reason for this request is simpler usage in a very dynamic
production environment - a cloud BI solution.

pg_backup could have a new option --conditional-drops, and pg_dump
would then produce DROP IF EXISTS statements instead of plain DROP
statements.
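
To make the proposal concrete, this is roughly the difference in the
emitted script (the table name is only an example):

    -- what pg_dump --clean emits today:
    DROP TABLE sales;

    -- what it would emit with the proposed --conditional-drops:
    DROP TABLE IF EXISTS sales;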

Ideas, comments?

Regards

Pavel Stehule



Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Torello Querci
2011/11/15 Pavel Stehule pavel.steh...@gmail.com:
 Hello,

 there is a request to enhance pg_backup to produce conditional
 DROPs. The reason for this request is simpler usage in a very dynamic
 production environment - a cloud BI solution.

 pg_backup could have a new option --conditional-drops, and pg_dump
 would then produce DROP IF EXISTS statements instead of plain DROP
 statements.

 Ideas, comments?

I think that if there is another way to get the same result, it is
better to use that instead of adding new options.

In this case, if you are in a Unix environment, I suppose you can use
an external tool such as sed to manipulate the output file.
Obviously this is just my personal opinion.
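
For instance, a crude post-processing pass might look like this (a
sketch only - it covers just a few object types and could mangle
matching text inside the data):

    sed -e 's/^DROP TABLE /DROP TABLE IF EXISTS /' \
        -e 's/^DROP SEQUENCE /DROP SEQUENCE IF EXISTS /' \
        -e 's/^DROP VIEW /DROP VIEW IF EXISTS /' \
        dump.sql > dump_conditional.sql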


Best Regards
Torello
 Regards

 Pavel Stehule





Re: [HACKERS] [REVIEW] Patch for cursor calling with named parameters

2011-11-15 Thread Yeb Havinga

On 2011-11-14 15:45, Yeb Havinga wrote:

On 2011-10-15 07:41, Tom Lane wrote:

Yeb Havinga yebhavi...@gmail.com writes:

Hello Royce,
Thanks again for testing.

I looked this patch over but concluded that it's not ready to apply,
mainly because there are too many weird behaviors around error
reporting.


Thanks again for the review and comments. Attached is v3 of the patch
that addresses all of the points made by Tom. In the regression test I
added a section under --- START ADDITIONAL TESTS that might speed up
testing.


Please disregard the previous patch: besides containing an unused
function, it turned out my statement that all of Tom's points were
addressed was not true - the attached patch fixes the remaining issue of
reporting two kinds of errors at the correct start of the current
argument location.


I also put some more comments in the regression test section: mainly to 
assist providing testcases for review, not for permanent inclusion.


To address a corner case of the form 'p1 := 1 -- comments\n, p2 := 2' it
was necessary to have read_sql_construct not trim trailing whitespace:
trimming folds the newline away, which yields an expression of the form
'1 -- comments, 2' in which the comment swallows the rest of the line.
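
A minimal illustration of that corner case, using a hypothetical
two-argument bound cursor (this is not a test from the patch itself):

    CREATE FUNCTION comment_corner_case() RETURNS integer AS $$
    DECLARE
        c CURSOR (p1 int, p2 int) FOR SELECT p1 + p2;
        r integer;
    BEGIN
        OPEN c(p1 := 1 -- this comment ends at the newline
              , p2 := 2);
        FETCH c INTO r;
        CLOSE c;
        RETURN r;      -- 3
    END;
    $$ LANGUAGE plpgsql;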


regards,
Yeb Havinga

diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
new file mode 100644
index f33cef5..17c04d2
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
*** OPEN curs1 FOR EXECUTE 'SELECT * FROM '
*** 2823,2833 ****
     </para>
   </sect3>
 
! <sect3>
   <title>Opening a Bound Cursor</title>
 
 <synopsis>
! OPEN <replaceable>bound_cursorvar</replaceable> <optional> ( <replaceable>argument_values</replaceable> ) </optional>;
 </synopsis>
 
  <para>
--- 2823,2833 ----
     </para>
   </sect3>
 
! <sect3 id="plpgsql-open-bound-cursor">
   <title>Opening a Bound Cursor</title>
 
 <synopsis>
!  OPEN <replaceable>bound_cursorvar</replaceable> <optional> ( <optional> <replaceable>argname</replaceable> := </optional> <replaceable>argument_value</replaceable> <optional>, ...</optional> ) </optional>;
 </synopsis>
 
  <para>
*** OPEN <replaceable>bound_cursorvar</repla
*** 2847,2856 ****
--- 2847,2867 ----
  </para>
 
  <para>
+   Cursors may be opened using either <firstterm>named</firstterm> or
+   <firstterm>positional</firstterm> notation. In contrast with calling
+   functions, described in <xref linkend="sql-syntax-calling-funcs">, it
+   is not allowed to mix positional and named notation. In positional
+   notation, all arguments are specified in order. In named notation,
+   each argument's name is specified using <literal>:=</literal> to
+   separate it from the argument expression.
+  </para>
+ 
+  <para>
    Examples (these use the cursor declaration examples above):
 <programlisting>
 OPEN curs2;
 OPEN curs3(42);
+ OPEN curs3(key := 42);
 </programlisting>
  </para>
 
*** COMMIT;
*** 3169,3186 ****
 
 <synopsis>
 <optional> &lt;&lt;<replaceable>label</replaceable>&gt;&gt; </optional>
! FOR <replaceable>recordvar</replaceable> IN <replaceable>bound_cursorvar</replaceable> <optional> ( <replaceable>argument_values</replaceable> ) </optional> LOOP
     <replaceable>statements</replaceable>
 END LOOP <optional> <replaceable>label</replaceable> </optional>;
 </synopsis>
 
   The cursor variable must have been bound to some query when it was
!  declared, and it <emphasis>cannot</> be open already.  The
!  <command>FOR</> statement automatically opens the cursor, and it closes
!  the cursor again when the loop exits.  A list of actual argument value
!  expressions must appear if and only if the cursor was declared to take
!  arguments.  These values will be substituted in the query, in just
!  the same way as during an <command>OPEN</>.
   The variable <replaceable>recordvar</replaceable> is automatically
   defined as type <type>record</> and exists only inside the loop (any
   existing definition of the variable name is ignored within the loop).
--- 3180,3203 ----
 
 <synopsis>
 <optional> &lt;&lt;<replaceable>label</replaceable>&gt;&gt; </optional>
! FOR <replaceable>recordvar</replaceable> IN <replaceable>bound_cursorvar</replaceable> <optional> ( <optional> <replaceable>argname</replaceable> := </optional> <replaceable>argument_value</replaceable> <optional>, ...</optional> ) </optional> LOOP
     <replaceable>statements</replaceable>
 END LOOP <optional> <replaceable>label</replaceable> </optional>;
 </synopsis>
 
   The cursor variable must have been bound to some query when it was
!  declared, and it <emphasis>cannot</> be open already.  The <command>FOR</>
!  statement automatically opens the cursor, and it closes the cursor again
!  when the loop exits.  The cursor's arguments may be supplied in either
!  <firstterm>named</firstterm> or <firstterm>positional</firstterm>
!  notation.  A list of actual argument value expressions must appear if and
!  only if the cursor was declared to take arguments.  These values will 

Re: [HACKERS] strict aliasing

2011-11-15 Thread Florian Weimer
* Andres Freund:

 I don't think gcc will ever be able to catch all possible misuses. E.g. the
 List API is a case where it's basically impossible to catch everything
 (as gcc won't be able to figure out what the ListCell.data.ptr_value
 pointed to originally in the general case).

Correct, if code is not strict-aliasing-safe and you compile with
-fstrict-aliasing, GCC may silently produce wrong code.  (Same with
-fwrapv, by the way.)
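
For readers wondering what unsafe code looks like, this is a minimal,
self-contained example of the kind of type-punning the rules forbid
(not code from PostgreSQL):

    #include <stdio.h>

    /* Undefined behavior under the C99 aliasing rules: a float object
     * is read through an int lvalue.  With -fstrict-aliasing the
     * compiler may assume an int pointer never refers to f, and
     * reorder or drop accesses accordingly. */
    static int
    float_bits(float f)
    {
        int *ip = (int *) &f;
        return *ip;
    }

    int
    main(void)
    {
        printf("0x%x\n", float_bits(1.0f));   /* 0x3f800000, if it works */
        return 0;
    }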

-- 
Florian Weimer                fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: [HACKERS] Regression tests fail once XID counter exceeds 2 billion

2011-11-15 Thread Simon Riggs
On Sun, Nov 13, 2011 at 11:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 While investigating bug #6291 I was somewhat surprised to discover
 $SUBJECT.  The cause turns out to be this kluge in alter_table.sql:

        select virtualtransaction
        from pg_locks
        where transactionid = txid_current()::integer

...

 that plasters on the appropriate epoch value for an
 assumed-to-be-current-or-recent xid, and returns something that squares
 with the txid_snapshot functions.  Then the test could be coded without
 kluges as

That fixes the test, but it doesn't fix the unreasonableness of this situation.

We need a function called transactionid_current() so a normal user can write

   select virtualtransaction
   from pg_locks
   where transactionid = transactionid_current()

and have it just work.

We need a function whose behaviour matches the xid columns in pg_locks
and elsewhere, and that doesn't need to have anything to do with the
txid datatype.
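
The underlying problem is easy to demonstrate: txid_current() returns a
64-bit value (epoch * 2^32 + xid), so the ::integer cast in the old test
overflows once the XID counter passes 2^31.  A sketch (the masking shown
is only an illustration, not the proposed function):

    SELECT 3000000000::bigint::integer;      -- ERROR: integer out of range

    -- masking off the epoch keeps only the low 32 bits:
    SELECT txid_current() % 4294967296 AS xid_only;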

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services



[HACKERS] Minor optimisation of XLogInsert()

2011-11-15 Thread Simon Riggs
The patch adds cacheline padding around parts of XLogCtl.

The idea is from way back, but it never got performance tested in a
situation where WALInsertLock was the bottleneck.

So this is speculative, and needs testing to establish its value.
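
For context, the technique looks roughly like this; the struct and
field names are illustrative stand-ins, not XLogCtl's actual layout:

    #include <stdio.h>
    #include <stddef.h>

    #define CACHE_LINE_SIZE 64

    /* Pad between two hot fields so they land on different cache
     * lines, preventing false sharing between their writers. */
    typedef struct
    {
        long insert_pos;                          /* hot field #1 */
        char pad[CACHE_LINE_SIZE - sizeof(long)];
        long write_pos;                           /* hot field #2 */
    } PaddedCtl;

    int
    main(void)
    {
        /* prints 64: the second field starts on the next cache line */
        printf("%zu\n", offsetof(PaddedCtl, write_pos));
        return 0;
    }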

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


XLogInsert_cacheline_opt.v3.patch
Description: Binary data



Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 there is a request to enhance pg_backup to produce conditional
 DROPs. The reason for this request is simpler usage in a very dynamic
 production environment - a cloud BI solution.

 pg_backup could have a new option --conditional-drops, and pg_dump
 would then produce DROP IF EXISTS statements instead of plain DROP
 statements.

That is not going to be possible unless we commit to having an IF EXISTS
option for every type of DROP statement, now and in the future.
Even then, it's not apparent to me that it solves any real use-case.
You're probably better off just using --clean and ignoring any errors.

regards, tom lane



Re: [HACKERS] Regression tests fail once XID counter exceeds 2 billion

2011-11-15 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 We need a function called transactionid_current() so a normal user can write

select virtualtransaction
from pg_locks
where transactionid = transactionid_current()

 and have it just work.

That would solve that one specific use-case.  The reason I suggested
txid_from_xid is that it could also be used to compare XIDs seen in
tuples to members of a txid_snapshot, which is not possible now.

regards, tom lane



Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Pavel Stehule
2011/11/15 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 there is a request to enhance pg_backup to produce conditional
 DROPs. The reason for this request is simpler usage in a very dynamic
 production environment - a cloud BI solution.

 pg_backup could have a new option --conditional-drops, and pg_dump
 would then produce DROP IF EXISTS statements instead of plain DROP
 statements.

 That is not going to be possible unless we commit to having an IF EXISTS
 option for every type of DROP statement, now and in the future.
 Even then, it's not apparent to me that it solves any real use-case.
 You're probably better off just using --clean and ignoring any errors.


OK

Regards

Pavel Stehule

                        regards, tom lane




Re: [HACKERS] Disable OpenSSL compression

2011-11-15 Thread Albe Laurenz
Bruce Momjian wrote:
 I'd still be willing to write a patch for a client-only solution.
 
 Agreed.  There is clearly a win in turning off SSL compression for
 certain workloads, and if people think the client is the right
 location, then let's do it there.

Here it is. I'll add it to the November commitfest.

Yours,
Laurenz Albe


ssl.patch
Description: ssl.patch



Re: [HACKERS] IDLE in transaction introspection

2011-11-15 Thread Scott Mead
On Thu, Nov 10, 2011 at 2:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Bruce Momjian br...@momjian.us writes:
  Well, we could use an optional details string for that.  If not, we
  are still using the magic-string approach, which I thought we didn't
  like.

 No, we're not using magic strings, we're using an enum --- maybe not an
 officially declared enum type, but it's a column with a predetermined
 set of possible values.  It would be a magic string if it were still in
 the query field and thus confusable with user-written queries.


Fell off the map last week, here's v2 that:
 * RUNNING => active
 * all states from CAPS to lower case

--
  Scott Mead
   OpenSCG http://www.openscg.com





pg_stat_activity_state_query.v2.patch
Description: Binary data



Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 9:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Pavel Stehule pavel.steh...@gmail.com writes:
 there is a request to enhance pg_backup to produce conditional
 DROPs. The reason for this request is simpler usage in a very dynamic
 production environment - a cloud BI solution.

 pg_backup could have a new option --conditional-drops, and pg_dump
 would then produce DROP IF EXISTS statements instead of plain DROP
 statements.

 That is not going to be possible unless we commit to having an IF EXISTS
 option for every type of DROP statement, now and in the future.
 Even then, it's not apparent to me that it solves any real use-case.
 You're probably better off just using --clean and ignoring any errors.

Ignoring errors sucks, though, because sometimes you want the whole
thing to succeed or fail as a unit.

I'm wondering why we need an option for this, though.  Assuming we
make DROP IF EXISTS work anywhere that it doesn't already, why not
just always produce that rather than straight DROP?  It seems
categorically better.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Yeb Havinga

On 2011-10-05 00:45, Royce Ausburn wrote:

Attached is v2 of my patch that doesn't update CATALOG_VERSION_NO.  I've also fixed the name of an
argument to pgstat_report_vacuum which I don't think was particularly good, and I've replaced the
word tuple with row in some docs I added for consistency.




Hello Royce,

I reviewed your patch. I think it is in good shape; my two main remarks
(the name of n_unremovable_tup and a remark about documentation at the
end of this review) are highly subjective, and I wouldn't spend time on
them unless other people have the same opinion.


Remarks:

* The rules regression test fails because pg_stat_all_tables is changed
(pg_stat_sys_tables and pg_stat_user_tables as well), but the new
expected/rules.out is not part of the patch.


* I'm not sure about the name n_unremovable_tup, since it doesn't convey
that it is about dead tuples; judging from the name alone it might as
well include the live tuples. It also doesn't hint that it is a
transient condition, which vacuum verbose does with the word 'yet'.
What about n_locked_dead_tup? That conveys both that it is about dead
tuples, and the lock suggests that once the lock is removed, the dead
tuple can be removed.


* The number shows correctly in pg_stat_user_tables. This is a test case
that gives unremovable dead rows (the setup here is aligned with the
update and vacuum output below, which use table t):

A:
create table t (i int primary key, b int);
insert into t values (10, 1);

B:
begin transaction ISOLATION LEVEL repeatable read;
select * from t;

A:
update t set b=2 where i=10;
vacuum verbose t;

Then output similar to the following is shown:

INFO:  vacuuming "public.t"
INFO:  index "t_pkey" now contains 1 row versions in 2 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "t": found 0 removable, 2 nonremovable row versions in 1 out of 1
pages

DETAIL:  1 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.01u sec elapsed 0.00 sec.
VACUUM
t=# \x
Expanded display is on.
t=# select * from pg_stat_user_tables;
-[ RECORD 1 ]-+--
relid | 16407
schemaname| public
relname   | t
seq_scan  | 1
seq_tup_read  | 0
idx_scan  | 1
idx_tup_fetch | 1
n_tup_ins | 1
n_tup_upd | 1
n_tup_del | 0
n_tup_hot_upd | 1
n_live_tup| 2
n_dead_tup| 0
n_unremovable_tup | 1
last_vacuum   | 2011-11-05 21:15:06.939928+01
last_autovacuum   |
last_analyze  |
last_autoanalyze  |
vacuum_count  | 1
autovacuum_count  | 0
analyze_count | 0
autoanalyze_count | 0

I did some more tests with larger numbers of updates, which revealed no
unexpected results.


I was puzzled for a while that n_unremovable_tup <= n_dead_tup doesn't
hold; after all, the unremovable tuples are dead as well. Neither the
current documentation nor the documentation added by the patch helps in
explaining the meaning of n_dead_tup and n_unremovable_tup, which may
be clear to seasoned vacuum hackers, but not to me. In the case of
n_dead_tup it would have been nice if the docs mentioned that dead
tuples are tuples that were deleted or are previous versions of updated
tuples, and that only analyze updates n_dead_tup (since vacuum cleans
them up), in contrast with n_unremovable_tup, which gets updated by
vacuum. Giving an example of how unremovable dead tuples can be caused
would IMHO also help understanding.


thanks,
Yeb Havinga




Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 10:05 AM, Yeb Havinga yebhavi...@gmail.com wrote:
 I reviewed your patch. I think it is in good shape; my two main remarks
 (the name of n_unremovable_tup and a remark about documentation at the
 end of this review) are highly subjective, and I wouldn't spend time on
 them unless other people have the same opinion.

I share your opinion; it's not obvious to me what this means either.
I guess this is a dumb question, but why don't we remove all the dead
tuples?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Yeb Havinga

On 2011-11-15 16:16, Robert Haas wrote:

On Tue, Nov 15, 2011 at 10:05 AM, Yeb Havinga yebhavi...@gmail.com wrote:

I reviewed your patch. I think it is in good shape; my two main remarks
(the name of n_unremovable_tup and a remark about documentation at the
end of this review) are highly subjective, and I wouldn't spend time on
them unless other people have the same opinion.

I share your opinion; it's not obvious to me what this means either.
I guess this is a dumb question, but why don't we remove all the dead
tuples?

The only case I could think of was that a still-running repeatable read
transaction read them before they were deleted (and the deletion
committed) by another transaction.


--
Yeb Havinga
http://www.mgrid.net/
Mastering Medical Data




Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Alvaro Herrera

Excerpts from Robert Haas's message of Tue Nov 15 12:16:54 -0300 2011:
 
 On Tue, Nov 15, 2011 at 10:05 AM, Yeb Havinga yebhavi...@gmail.com wrote:
  I reviewed your patch. I think it is in good shape; my two main remarks
  (the name of n_unremovable_tup and a remark about documentation at the
  end of this review) are highly subjective, and I wouldn't spend time on
  them unless other people have the same opinion.
 
 I share your opinion; it's not obvious to me what this means either.
 I guess this is a dumb question, but why don't we remove all the dead
 tuples?

They were deleted but there are transactions with older snapshots.

I think vacuum uses the terms nondeletable or nonremovable.  Not sure
which one is less bad.  Not being a native speaker, they both sound
horrible to me.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Alvaro Herrera

Excerpts from Robert Haas's message of Tue Nov 15 11:59:17 -0300 2011:
 On Tue, Nov 15, 2011 at 9:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Pavel Stehule pavel.steh...@gmail.com writes:
  there is a request to enhance pg_backup to produce conditional
  DROPs. The reason for this request is simpler usage in a very dynamic
  production environment - a cloud BI solution.

  pg_backup could have a new option --conditional-drops, and pg_dump
  would then produce DROP IF EXISTS statements instead of plain DROP
  statements.
 
  That is not going to be possible unless we commit to having an IF EXISTS
  option for every type of DROP statement, now and in the future.
  Even then, it's not apparent to me that it solves any real use-case.
  You're probably better off just using --clean and ignoring any errors.
 
 Ignoring errors sucks, though, because sometimes you want the whole
 thing to succeed or fail as a unit.
 
 I'm wondering why we need an option for this, though.  Assuming we
 make DROP IF EXISTS work anywhere that it doesn't already, why not
 just always produce that rather than straight DROP?  It seems
 categorically better.

I think there's a fuzzy idea that we should try to keep our dumps
vaguely compatible with other systems.  If we add DROP IF EXISTS
unconditionally, there would be no way to make them run elsewhere.

Of course, our dumps already fail for a lot of reasons (for example SET
commands and COPY), but I think the idea is that if you dump with
INSERTs rather than COPY, and have the other server ignore errors found
while processing the script, you should find that it mostly loads the
tables and data.  I don't know how well this principle works for the
DROP commands.

I wonder whether, instead of trying to remain somewhat compatible with
other systems, we should have a mode specifically designed for that
--one which didn't output SET or backslash commands, used INSERTs
rather than COPY, etc-- and have the noncompatible mode offer nice
features such as DROP IF EXISTS and the like.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Greg Smith

On 11/15/2011 10:29 AM, Alvaro Herrera wrote:

They were deleted but there are transactions with older snapshots.
I think vacuum uses the terms nondeletable or nonremovable.  Not sure
which one is less bad.  Not being a native speaker, they both sound
horrible to me.


I would go more for something like dead-in-use.  Saying they are
unremovable isn't very helpful because it doesn't lead the user to
knowing why.  If the name gives some suggestion as to why they are
unremovable--in this case that they are still potentially visible and
usable by old queries--that would be a useful naming improvement to me.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] patch : Allow toast tables to be moved to a different tablespace

2011-11-15 Thread Jaime Casanova
On Tue, Nov 15, 2011 at 11:08 AM, Julien Tachoires jul...@gmail.com wrote:

 Maybe I missed something, but the normal case is:
 ALTER TABLE ... SET TABLESPACE => moves the table and moves the associated TOAST table
 ALTER TABLE ... SET TABLE TABLESPACE => moves the table and keeps the associated TOAST table in its place
 ALTER TABLE ... SET TOAST TABLESPACE => keeps the table in its place and moves the associated TOAST table


Oh! I didn't test the patch, just read it... maybe I misunderstood;
I will look at it again.

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación



Re: [HACKERS] FlexLocks

2011-11-15 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 
 I'm not necessarily saying that any of these particular
 things are what we want to do, just throwing out the idea that we
 may want a variety of lock types that are similar to lightweight
 locks but with subtly different behavior, yet with common
 infrastructure for error handling and wait queue management.
 
The locking in the clog area is pretty funky.  I bet we could craft
a special flavor of FlexLock to make that cleaner.  And I would be
surprised if some creative thinking didn't yield a far better FlexLock
scheme for SSI than we can manage with existing LWLocks.
 
Your description makes sense to me, and your numbers prove the value
of the concept.  Whether there's anything in the implementation I
would quibble about will take some review time.
 
-Kevin



Re: [HACKERS] IDLE in transaction introspection

2011-11-15 Thread Greg Smith

On 11/15/2011 09:44 AM, Scott Mead wrote:

Fell off the map last week, here's v2 that:
 * RUNNING => active
 * all states from CAPS to lower case



This looks like what I was hoping someone would add here now.  The patch
looks good; the only issue I noticed was a spaces-instead-of-a-tab goof
where you set beentry->st_state at line 2419 in
src/backend/postmaster/pgstat.c.


Missing pieces:

-There is one regression test that uses pg_stat_activity that is broken now.
-The documentation should list the new field and all of the states it 
might include.  That's a serious doc update from the minimal info 
available right now.


I know this issue has been beat up already some, but let me summarize 
and extend that thinking a moment.  I see two equally valid schools of 
thought on how exactly to deal with introducing this change:


-Add the new state field just as you've done it, but keep updating the 
query text anyway.  Do not rename current_query.  Declare the 
overloading of current_query as both a state and the query text to be 
deprecated in 9.3.  This would keep existing tools working fine against 
9.2 and give a clean transition period.


-Forget about backward compatibility and just put all the breaking stuff 
we've been meaning to do in here.  If we're going to rename 
current_query to query--what Scott's patch does here--that will force 
all code using pg_stat_activity to be rewritten.  This seems like the 
perfect time to also change procpid to pid, finally blow away that wart.


I'll happily update all of the tools and samples I deal with to support 
this change.  Most of the ones I can think of will be simplified; 
they're already parsing query_text and extracting the implicit state.  
Just operating on an explicit one instead will be simpler and more robust.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Joshua Berkus
Greg,

 I'm not attached to the name, which I just pulled out of the air for
 the documentation.  Could just as easily call them built-in modules or
 extensions.  If the objection is that extensions isn't technically
 correct for auto-explain, you might call them core add-ons instead.
 My thinking was that the one exception didn't make it worth the
 trouble to introduce a new term altogether here.  There's already too
 many terms used for talking about this sort of thing; the confusion
 from using a word other than extensions seemed larger than the
 confusion sown by auto-explain not fitting perfectly.

Well, I do think it should be *something* Extensions.  But Core Extensions
implies that the other stuff is just random code, and makes the user wonder why
it's included at all.  If we're going to rename some of the extensions, then we
really need to rename them all, or it looks like the others are being deprecated.

Maybe:

Core Management Extensions
Core Development Extensions
Additional Database Tools
Code Examples
Legacy Modules

I think that covers everything we have in contrib.

Given discussion, is there any point in reporting on the actual patch yet?

--Josh Berkus




Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Joshua Berkus
Peter,

 I consider contrib/isn to be quite broken. It hard codes ISBN
 prefixes for the purposes of sanitising ISBNs, even though their
 assignment is actually controlled by a decentralised body of regional
 authorities. I'd vote for kicking it out of contrib.

Submit a patch to fix it then.  

I use ISBN in 2 projects, and it's working fine for me.  I'll strongly resist 
any attempt to kick it out.

--Josh Berkus



Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 10:36 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 I'm wondering why we need an option for this, though.  Assuming we
 make DROP IF EXISTS work anywhere that it doesn't already, why not
 just always produce that rather than straight DROP?  It seems
 categorically better.

 I think there's a fuzzy idea that we should try to keep our dumps
 vaguely compatible with other systems.  If we add DROP IF EXISTS
 unconditionally, there would be no way to make them run elsewhere.

 Of course, our dumps already fail for a lot of reasons (for example SET
 commands and COPY), but I think the idea is that if you dump with
 INSERTs rather than COPY, and have the other server ignore errors found
 while processing the script, you should find that it mostly loads the
 tables and data.  I don't know how well this principle works for the
 DROP commands.

Well, except in --clean mode, we don't emit DROP commands at all.  And
since --clean doesn't even work well on PostgreSQL, I can't get too
excited about whether it will work everywhere else.

 I wonder whether, instead of trying to remain somewhat compatible with
 other systems, we should have a mode specifically designed for that
 --one which didn't output SET or backslash commands, used INSERTs
 rather than COPY, etc-- and have the noncompatible mode offer nice
 features such as DROP IF EXISTS and the like.

mysqldump has a --compatible=OTHER_DB_SYSTEM flag (postgresql is one
of the choices).  That might not be a crazy way to approach the
problem, though possibly we'd want --compatible=FOO to be a shorthand
for a collection of behaviors that could alternatively be selected
individually via appropriately named long options
(--no-backslash-commands, --no-set-commands, etc.).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Christopher Browne
On Tue, Nov 15, 2011 at 10:36 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:

 Excerpts from Robert Haas's message of Tue Nov 15 11:59:17 -0300 2011:
 On Tue, Nov 15, 2011 at 9:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Pavel Stehule pavel.steh...@gmail.com writes:
  there is a request to enhance pg_backup to produce conditional
  DROPs. The reason for this request is simpler usage in a very dynamic
  production environment - a cloud BI solution.

  pg_backup could have a new option --conditional-drops, and pg_dump
  would then produce DROP IF EXISTS statements instead of plain DROP
  statements.
 
  That is not going to be possible unless we commit to having an IF EXISTS
  option for every type of DROP statement, now and in the future.
  Even then, it's not apparent to me that it solves any real use-case.
  You're probably better off just using --clean and ignoring any errors.

 Ignoring errors sucks, though, because sometimes you want the whole
 thing to succeed or fail as a unit.

 I'm wondering why we need an option for this, though.  Assuming we
 make DROP IF EXISTS work anywhere that it doesn't already, why not
 just always produce that rather than straight DROP?  It seems
 categorically better.

 I think there's a fuzzy idea that we should try to keep our dumps
 vaguely compatible with other systems.  If we add DROP IF EXISTS
 unconditionally, there would be no way to make them run elsewhere.

Well, it seems to me there's a mixed signal here.

- When operating Postgres-to-Postgres, the suggestions are things like:

   "Oh, you can just ignore these errors"
   "Oh, you can just use sed to change things to play better"

I think I'd rather have *some* option here of there being some
benefit to PG-to-PG.  I'd rather hear things like...

  "OK, if you're using some other database, you can just ignore these errors"
  "OK, if you're using some other database that doesn't know about
DROP IF EXISTS, then you can just use sed to fix that"

 Of course, our dumps already fail for a lot of reasons (for example SET
 commands and COPY), but I think the idea is that if you dump with
 INSERTs rather than COPY, and have the other server ignore errors found
 while processing the script, you should find that it mostly loads the
 tables and data.  I don't know how well this principle works for the
 DROP commands.

Back in either 8.1 or 8.2, I think it was, we added a pretty
complete set of DROP IF EXISTS commands.

While I am not keen on adding 250 options to pg_dump, I think needing
to anticipate failures is not the most wonderful thing in the world.

I'd rather have *some* controls that do NOT involve needing to write a
parser for the fragile combination of SQL that pg_dump generates.

 I wonder whether, instead of trying to remain somewhat compatible with
 other systems, we should have a mode specifically designed for that
 --one which didn't output SET or backslash commands, used INSERTs
 rather than COPY, etc-- and have the noncompatible mode offer nice
 features such as DROP IF EXISTS and the like.

+1 on that, yeah.

The part that'll be nasty-ish is the question of to what degree we'd
like to be cross-PG-version compatible.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 12:54 PM, Joshua Berkus j...@agliodbs.com wrote:
 I consider contrib/isn to be quite broken. It hard codes ISBN
 prefixes for the purposes of sanitising ISBNs, even though their
 assignment is actually controlled by a decentralised body of regional
 authorities. I'd vote for kicking it out of contrib.

 Submit a patch to fix it then.

It's not fixable.  The ISBN datatype is the equivalent of having an
SSN datatype that only allows SSNs that have actually been assigned to
a US citizen.

 I use ISBN in 2 projects, and it's working fine for me.  I'll strongly resist 
 any attempt to kick it out.

That's exactly why contrib is a random amalgamation of really useful
stuff and utter crap: people feel justified in defending the continued
existence of the crap on the sole basis that it's useful to them
personally.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] IDLE in transaction introspection

2011-11-15 Thread Robert Treat
On Tue, Nov 15, 2011 at 12:00 PM, Greg Smith g...@2ndquadrant.com wrote:
 On 11/15/2011 09:44 AM, Scott Mead wrote:

 Fell off the map last week, here's v2 that:
  * RUNNING => active
  * all states from CAPS to lower case


 This looks like what I was hoping someone would add here now.  The patch
 looks good; the only issue I noticed was a spaces-instead-of-a-tab goof
 where you set beentry->st_state at line 2419 in src/backend/postmaster/pgstat.c.

 Missing pieces:

 -There is one regression test that uses pg_stat_activity that is broken now.
 -The documentation should list the new field and all of the states it might
 include.  That's a serious doc update from the minimal info available right
 now.

 I know this issue has been beat up already some, but let me summarize and
 extend that thinking a moment.  I see two equally valid schools of thought
 on how exactly to deal with introducing this change:

 -Add the new state field just as you've done it, but keep updating the query
 text anyway.  Do not rename current_query.  Declare the overloading of
 current_query as both a state and the query text to be deprecated in 9.3.
  This would keep existing tools working fine against 9.2 and give a clean
 transition period.

 -Forget about backward compatibility and just put all the breaking stuff
 we've been meaning to do in here.  If we're going to rename current_query to
 query--what Scott's patch does here--that will force all code using
 pg_stat_activity to be rewritten.  This seems like the perfect time to also
 change procpid to pid, finally blow away that wart.


+1

 I'll happily update all of the tools and samples I deal with to support this
 change.  Most of the ones I can think of will be simplified; they're already
 parsing query_text and extracting the implicit state.  Just operating on an
 explicit one instead will be simpler and more robust.


Let me know if you need help; we'll be doing this with some of our
tools / projects anyway, and it probably wouldn't hurt to coordinate.


Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Greg Smith

On 11/15/2011 12:53 PM, Joshua Berkus wrote:

Given discussion, is there any point in reporting on the actual patch yet?


I don't expect the discussion will alter the code changes that are the
main chunk of the patch here.  The only place the most disputed parts
have an impact is the documentation.


I like Management Extensions as an alternate name for this category 
instead, even though it still has the issue that auto_explain isn't 
technically an extension.  The name does help suggest why they're thrown 
into a different directory and package.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 10:29 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of Tue Nov 15 12:16:54 -0300 2011:
 On Tue, Nov 15, 2011 at 10:05 AM, Yeb Havinga yebhavi...@gmail.com wrote:
  I reviewed your patch. I think it is in good shape; my two main remarks
  (the name of n_unremovable_tup and a remark about documentation at the
  end of this review) are highly subjective, and I wouldn't spend time on
  them unless other people have the same opinion.

 I share your opinion; it's not obvious to me what this means either.
 I guess this is a dumb question, but why don't we remove all the dead
 tuples?

 They were deleted but there are transactions with older snapshots.

Oh.  I was thinking dead meant no longer visible to anyone.  But it
sounds like what we call unremovable here is what we elsewhere call
recently dead.

 I think vacuum uses the terms nondeletable or nonremovable.  Not sure
 which one is less bad.  Not being a native speaker, they both sound
 horrible to me.

nondeletable is surely terrible, since they may well have got into
this state by being deleted.  nonremovable is better, but still not
great.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Bruce Momjian
Robert Haas wrote:
  I use ISBN in 2 projects, and it's working fine for me.  I'll strongly
  resist any attempt to kick it out.
 
 That's exactly why contrib is a random amalgamation of really useful
 stuff and utter crap: people feel justified in defending the continued
 existence of the crap on the sole basis that it's useful to them
 personally.

Agreed.  Berkus must have one million customers to have X customers
using every feature we want to remove or change.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 Joshua Berkus j...@agliodbs.com wrote:
 
 I consider contrib/isn to be quite broken. It hard codes ISBN
 prefixes for the purposes of sanitising ISBNs, even though their
 assignment is actually controlled by a decentralised body of
 regional authorities.
 
By an international standard which says what numbers are valid in
the prefix element and registration group element of the ISBN
for each of those regional authorities, and how the check digit is
to be calculated.
 
 I'd vote for kicking it out of contrib.

 Submit a patch to fix it then.
 
 It's not fixable.  The ISBN datatype is the equivalent of having
 an SSN datatype that only allows SSNs that have actually been
 assigned to a US citizen.
 
Certainly it would make sense to go so far as to support the overall
standard format as described here:
 
http://www.isbn-international.org/faqs/view/5#q_5
 
Beyond the broad strokes there, perhaps it would make sense for the
type to be able to digest a RangeMessage.xml file supplied by the
standards organization, so that the current ranges could be plugged
in as needed independently of the PostgreSQL release.
 
http://www.isbn-international.org/page/ranges
http://www.isbn-international.org/pages/media/Range%20message/RangeMessage.pdf
 
Hard-coding ranges as of some moment in time seems pretty dubious.
 
-Kevin



Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Christopher Browne
On Tue, Nov 15, 2011 at 1:33 PM, Robert Haas robertmh...@gmail.com wrote:
 nondeletable is surely terrible, since they may well have got into
 this state by being deleted.  nonremovable is better, but still not
 great.

A bit of a brainstorm, including a look at the terminology used for
garbage collection:
- stillreferenceable
- notyetremovable
- referenceable
- reachable

Perhaps those suggest some option that is a bit less horrible?  I
think I like referenceable best of those.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Josh Berkus

 Submit a patch to fix it then.
 
 It's not fixable.  The ISBN datatype is the equivalent of having an
 SSN datatype that only allows SSNs that have actually been assigned to
 a US citizen.

Nothing is not fixable.  "Not fixable without breaking backwards
compatibility" is entirely possible, though.  If fixing it led to two
different versions of ISN, then that would be a reason to push it to
PGXN instead of shipping it.

It's not as if ISN is poorly coded. This is a spec issue, which must
have been debated when we first included it.  No?

 That's exactly why contrib is a random amalgamation of really useful
 stuff and utter crap: people feel justified in defending the continued
 existence of the crap on the sole basis that it's useful to them
 personally.

Why else would we justify anything?  It's very difficult to argue on the
basis of theoretical users.  How would we really know what a theoretical
user wants?

Calling something crap because it has a spec issue is unwarranted.
We're going to get nowhere in this discussion as long as people are
using extreme and non-descriptive terms.

The thing is, most of the extensions in /contrib have major flaws, or
they would have been folded in to the core code by now.  CITEXT doesn't
support multiple collations.  INTARRAY and LTREE have inconsistent
operators and many bugs.  CUBE lacks documentation.  DBlink is an
ongoing battle with security holes.  etc.

Picking out one of those and saying "this is crap because of reason X,
but I'll ignore all the flaws in all these other extensions" is
inconsistent and not liable to produce results.  Now, if you wanted to
argue that we should kick *all* of the portable extensions out of
/contrib and onto PGXN, then you'd have a much stronger argument.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 I wonder whether, instead of trying to remain somewhat compatible with
 other systems, we should have a mode specifically designed for that
 --one which didn't output SET or backslash commands, used INSERTs
 rather than COPY, etc-- and have the noncompatible mode offer nice
 features such as DROP IF EXISTS and the like.

 mysqldump has a --compatible=OTHER_DB_SYSTEM flag (postgresql is one
 of the choices).  That might not be a crazy way to approach the
 problem, though possibly we'd want --compatible=FOO to be a shorthand
 for a collection of behaviors that could alternatively be selected
 individually via appropriately named long options
 (--no-backslash-commands, --no-set-commands, etc.).

I can't help recalling Hannu's lightning talk at pgconf.eu in
Amsterdam last month.  What about implementing the MySQL protocol and
syntax instead, so that users could just use mysqldump here, if that's
the format they want to play with?

Not the same scale of work, but opening our infrastructure to multiple
syntaxes and protocols could be something to aim for.  Think of the
memcache protocol backed by hstore or even plv8, too.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] FlexLocks

2011-11-15 Thread Simon Riggs
On Tue, Nov 15, 2011 at 1:50 PM, Robert Haas robertmh...@gmail.com wrote:

 It basically
 works like a regular LWLock, except that it has a special operation to
 optimize ProcArrayEndTransaction().  In the uncontended case, instead
 of acquiring and releasing the lock, it just grabs the lock, observes
 that there is no contention, clears the critical PGPROC fields (which
 isn't noticeably slower than updating the state of the lock would be)
 and releases the spin lock.  There's then no need to reacquire the
 spinlock to release the lock; we're done.  In the contended case,
 the backend wishing to end adds itself to a queue of ending
 transactions.  When ProcArrayLock is released, the last person out
 clears the PGPROC structures for all the waiters and wakes them all
 up; they don't need to reacquire the lock, because the work they
 wished to perform while holding it is already done.  Thus, in the
 *worst* case, ending transactions only need to acquire the spinlock
 protecting ProcArrayLock half as often (once instead of twice), and in
 the best case (where backends have to keep retrying only to repeatedly
 fail to get the lock) it's far better than that.

Which is the same locking avoidance technique we already use for sync
rep and for the new group commit patch.

I've been saying for some time that we should use the same technique
for ProcArray and clog also, so we only need to queue once rather than
three times at the end of each transaction.

I'm not really enthused by the idea of completely rewriting lwlocks
for this.  Specialised code seems likely to be best, as well as
having less collateral damage.

With that in mind, should we try to fuse group commit with the
ProcArrayLock approach, so we just queue once and get woken when all
the activities have been handled?  If the first woken proc performs the
actions and then wakes the people further down the queue, it could work
quite well.
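
For readers following along, here is a schematic, self-contained sketch
of the queue-based clearing Robert describes above; every name here is
an illustrative stand-in (the real code uses PGPROC, PostgreSQL's
spinlocks, and semaphores rather than a busy-wait):

    /* Stand-ins for PGPROC and the lock state. */
    typedef struct ProcSketch
    {
        struct ProcSketch *next;
        int                xid;   /* field to clear at end of xact */
        volatile int       done;
    } ProcSketch;

    typedef struct
    {
        int         mutex;        /* stands in for a spinlock */
        int         held;         /* is the lock currently held? */
        ProcSketch *ending;       /* queue of ending transactions */
    } FlexLockSketch;

    static void spin_acquire(int *m) { while (__sync_lock_test_and_set(m, 1)) ; }
    static void spin_release(int *m) { __sync_lock_release(m); }

    void
    flexlock_end_xact(FlexLockSketch *lk, ProcSketch *me)
    {
        spin_acquire(&lk->mutex);
        if (!lk->held)
        {
            /* Uncontended: clear our fields while still holding the
             * spinlock; one acquisition instead of acquire + release. */
            me->xid = 0;
            spin_release(&lk->mutex);
            return;
        }
        /* Contended: enqueue ourselves.  The last lock holder out
         * clears our xid and sets done, so we never take the lock. */
        me->next = lk->ending;
        lk->ending = me;
        spin_release(&lk->mutex);
        while (!me->done)
            ;                     /* real code sleeps on a semaphore */
    }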

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services



Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Peter Geoghegan
On 15 November 2011 18:03, Robert Haas robertmh...@gmail.com wrote:
 It's not fixable.  The ISBN datatype is the equivalent of having an
 SSN datatype that only allows SSNs that have actually been assigned to
 a US citizen.

That isn't even the worst part. isn is basically only useful in the US
at the moment, because in every other country there are a number of
bar code symbologies that co-exist within supply chain management for
various reasons. Only in the US can you reasonably assume that all
articles that you come across will be UPC, and even that might be a
shaky assumption these days.

In the E.U. and much of the rest of the world, products that you buy
in a store will bear one of the following symbologies: EAN-8 (for
small articles like chewing gum), UPC (the American one, 12 digits),
EAN-13, and GTIN-14. Some, but not all, of these are available from
contrib/isn. There is no datatype that represents "some known barcode
format", even though writing an SQL function that can be used in a
domain check constraint to do just that is next to trivial. I guess
that means you'd either have to have multiple columns, one per format,
each existing in case the article in question was of that particular
format, or you'd need to make a hard assumption about the symbology
used for all articles that will ever be entered.

The way that these formats maintain backwards compatibility is by
making previous formats valid as the new format: zeros are padded to
the left of the older format. So a UPC is actually a valid EAN-13 just
by adding a zero to the start - the US EAN country code is zero, IIRC,
so the rest of the world can pretend that Americans use
EAN-13s/GTIN-14s, even though they think they use UPCs. The check
digit algorithms for each successive symbology are built such that
this works. This is why a DIY bar code bigint domain can be written so
easily (see the sketch below), and also why doing so makes far more
sense than using this contrib module.
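
As a rough illustration of the DIY domain Peter describes (a sketch
only: it validates the EAN-13 check digit, uses text rather than
bigint for brevity, and all the names are made up):

    CREATE FUNCTION ean13_check_ok(code text) RETURNS boolean AS $$
        SELECT (SELECT sum(substr($1, i, 1)::int *
                           CASE WHEN i % 2 = 1 THEN 1 ELSE 3 END)
                FROM generate_series(1, 13) AS i) % 10 = 0
    $$ LANGUAGE sql IMMUTABLE;

    CREATE DOMAIN ean13 AS text
        CHECK (VALUE ~ '^[0-9]{13}$' AND ean13_check_ok(VALUE));

    -- a zero-padded 12-digit UPC passes as an EAN-13:
    SELECT '0036000291452'::ean13;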

To my mind, contrib/isn is a solution looking for a problem, and
that's before we even talk about ISBN prefixes. By including it, we
give users a false sense of security about doing the right thing, when
they're very probably not.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] strict aliasing

2011-11-15 Thread Kevin Grittner
Florian Weimer fwei...@bfk.de wrote:
 * Andres Freund:
 
  I don't think gcc will ever be able to catch all possible misuses.
  E.g. the List API is a case where it's basically impossible to
  catch everything (as gcc won't be able to figure out what the
  ListCell.data.ptr_value pointed to originally in the general
  case).
 
 Correct, if code is not strict-aliasing-safe and you compile with
 -fstrict-aliasing, GCC may silently produce wrong code.  (Same
 with -fwrapv, by the way.)
 
I've spent a little time trying to get my head around what to look
for in terms of unsafe code, but am not really there yet. Meanwhile,
I've run a few more benchmarks of -fstrict-aliasing (without also
changing to the -O3 switch) compared to a normal build.  In no
benchmark so far has strict aliasing by itself performed better on
my i7, and in most cases it is slightly worse.  (This means that
some of the optimizations in -O3 probably *did* have a small net
positive, since the benchmarks combining both showed a gain as long
as there weren't more clients than cores, and the net loss on just
strict aliasing would account for the decrease at higher client
counts.)
 
From my reading, it appears that if we get the code safe in terms of
strict aliasing, we might be able to use the restrict keyword to
get further optimizations that bring it to a net win, but I think
there is currently lower-hanging fruit than monkeying with these
compiler options.  I'm letting this go, although I still favor the
const-ifying that started this discussion, on the grounds of API
clarity.
 
-Kevin



Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Alvaro Herrera

Excerpts from Robert Haas's message of Tue Nov 15 15:03:03 -0300 2011:
 On Tue, Nov 15, 2011 at 12:54 PM, Joshua Berkus j...@agliodbs.com wrote:
  I consider contrib/isn to be quite broken. It hard codes ISBN
  prefixes for the purposes of sanitising ISBNs, even though their
  assignment is actually controlled by a decentralised body of regional
  authorities. I'd vote for kicking it out of contrib.
 
  Submit a patch to fix it then.
 
 It's not fixable.  The ISBN datatype is the equivalent of having an
 SSN datatype that only allows SSNs that have actually been assigned to
 a US citizen.

Interesting.  Isn't it possible to separate it into two parts: one
containing generic input, formatting, and check-digit verification, and
another one that would do the prefix matching?  Presumably the first
part would still be useful without prefix validation, wouldn't it?
Surely the other validation rules aren't different for the various
prefixes.

Perhaps the prefix matching code should not be in C, or at least use a
lookup table that's not in C code, so that it can be updated in userland
without having to recompile.  (BTW, this is very similar to the problem
confronted by the E.164 module, which attempts to do telephone number
validation and formatting; there's a generic body of code that handles
the basic country code validation, but there's supposed to be some
per-CC formatting rules which we're not really clear on where to store.
We punted on it by just having that in a GUC, so that the user can
update it easily; but that's clearly not the best solution).
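
To be concrete, the lookup table could be as simple as this (a sketch,
with the names invented here):

CREATE TABLE isbn_prefix (
    prefix  text PRIMARY KEY,  -- e.g. '978-0'
    agency  text NOT NULL      -- registration group that owns the prefix
);

-- the generic layer then consults the table instead of a compiled-in
-- array, so a newly assigned prefix is just an INSERT away:
SELECT agency
  FROM isbn_prefix
 WHERE '9780262033848' LIKE replace(prefix, '-', '') || '%';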

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Peter Geoghegan
On 15 November 2011 19:01, Josh Berkus j...@agliodbs.com wrote:
 Nothing is not fixable.

My idea of fixing contrib/isn would be to remove so much of it that it
would obviously make more sense to use simple, flexible SQL. It simply
makes way too many assumptions about the user's business rules for a
generic C module.

 Calling something crap because it has a spec issue is unwarranted.
 We're going to get nowhere in this discussion as long as people are
 using extreme and non-descriptive terms.

It is warranted, because contrib/isn is extremely wrong-headed.

 The thing is, most of the extensions in /contrib have major flaws, or
 they would have been folded in to the core code by now.  CITEXT doesn't
 support multiple collations.  INTARRAY and LTREE have inconsistent
 operators and many bugs.  CUBE lacks documentation.  DBlink is an
 ongoing battle with security holes.  etc.

The difference is that it's possible to imagine a scenario under which
I could recommend using any one of those modules, despite their flaws.
I could not contrive a reason for using contrib/isn.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] FlexLocks

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 1:40 PM, Simon Riggs si...@2ndquadrant.com wrote:
 Which is the same locking avoidance technique we already use for sync
 rep and for the new group commit patch.

Yep...

 I've been saying for some time that we should use the same technique
 for ProcArray and clog also, so we only need to queue once rather than
 queue three times at end of each transaction.

 I'm not really enthused by the idea of completely rewriting lwlocks
 for this. Seems like specialised code is likely to be best, as well as
 having less collateral damage.

Well, the problem that I have with that is that we're going to end up
with a lot of specialized code, particularly around error recovery.
This code doesn't remove the need for ProcArrayLock to be taken in
exclusive mode, and I don't think there IS any easy way to remove the
need for that to happen sometimes.  So we have to deal with the
possibility that an ERROR might occur while we hold the lock, which
means we have to release the lock and clean up our state.  That means
every place that has a call to LWLockReleaseAll() will now also need
to cleanup ProperlyCleanUpProcArrayLockStuff().  And the next time we
need to add some kind of specialized lock, we'll need to do the same
thing again.  It seems to me that that rapidly gets unmanageable, not
to mention *slow*.  We need some kind of common infrastructure for
releasing locks, and this is an attempt to create such a thing.  I'm
not opposed to doing it some other way, but I think doing each one as
a one-off isn't going to work out very well.

Also, in this particular case, I really do want shared and exclusive
locks on ProcArrayLock to stick around; I just want one additional
operation as well.  It's a lot less duplicated code to do that this
way than it is to write something from scratch.  The FlexLock patch
may look big, but it's mostly renaming and rearranging; it's really
not adding much code.

 With that in mind, should we try to fuse the group commit with the
 procarraylock approach, so we just queue once and get woken when all
 the activities have been handled? If the first woken proc performs the
 actions then wakes people further down the queue it could work quite
 well.

Well, there's too much work there to use the same approach I took
here: we can't very well hold onto the LWLock spinlock while flushing
WAL or waiting for synchronous replication.  Fusing together some
parts of the commit sequence might be the right approach (I don't
know), but honestly my gut feeling is that the first thing we need to
do is go in the opposite direction and break up WALInsertLock into
multiple locks that allow better parallelization of WAL insertion.  Of
course if someone finds a way to fuse the whole commit sequence
together in some way that improves performance, fantastic, but having
tried a lot of things before I came up with this approach, I'm a bit
reluctant to abandon it in favor of an approach that hasn't been coded
or tested yet.  I think we should pursue this approach for now, and we
can always revise it later if someone comes up with something even
better.  As a practical matter, the test results show that with these
patches, ProcArrayLock is NOT a bottleneck at 32 cores, which seems
like enough reason to be pretty happy with it, modulo implementation
details.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] patch for type privileges

2011-11-15 Thread Peter Eisentraut
Here is the patch to implement type privileges that I alluded to
earlier.  To recall, this is mainly so that owners can prevent others
from using their types because that would in some cases prevent owners
from changing the types.  That would effectively be a denial of service.

These are the interfaces that this patch implements:

- GRANT USAGE ON DOMAIN
- GRANT USAGE ON TYPE
- default privileges for types
- analogous REVOKEs
- display privileges in psql \dT+
- privilege checks in various DDL commands (CREATE FUNCTION, CREATE
TABLE, etc.)
- various information schema views adjusted
- has_type_privilege function family

The basics here are mainly informed by the SQL standard.  One thing from
there I did not implement is checking for permission of a type used in
CAST (foo AS type).  This would be doable but relatively complicated,
and in practice someone how is not supposed to be able to use the type
wouldn't be able to create the cast or the underlying cast function
anyway for lack of access to the type.

As elsewhere in the system, the usage of TYPE and DOMAIN is partially
overlapping and partially not.  You can use GRANT ON TYPE on a domain
but not GRANT ON DOMAIN on a type (compare CREATE/DROP).  We only
support one common set of default privileges for types and domains.  I
feel that's enough, but it could be adjusted.
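
For example (object names invented; the default-privileges spelling
below is my best guess at what the patch provides):

CREATE DOMAIN us_postal_code AS text CHECK (VALUE ~ '^[0-9]{5}$');

REVOKE USAGE ON DOMAIN us_postal_code FROM PUBLIC;
GRANT USAGE ON TYPE us_postal_code TO webuser;  -- TYPE accepts a domain

ALTER DEFAULT PRIVILEGES REVOKE USAGE ON TYPES FROM PUBLIC;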

Open items:

- GRANT TO ALL TYPES -- haven't gotten to that yet, but could be added

A reviewer should of course particularly check if there are any holes in
the privilege protection that this patch purports to afford.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 2:01 PM, Josh Berkus j...@agliodbs.com wrote:
 Submit a patch to fix it then.

 It's not fixable.  The ISBN datatype is the equivalent of having an
 SSN datatype that only allows SSNs that have actually been assigned to
 a US citizen.

 Nothing is not fixable.  not fixable without breaking backwards
 compatibility is entirely possible, though.  If fixing it led to two
 different versions of ISN, then that would be a reason to push it to
 PGXN instead of shipping it.

Well, the way to fix it would be to publish a new version of
PostgreSQL every time the international authority that assigns ISBN
prefixes allocates a new one, and for everyone to then update their
PostgreSQL installation every time we do that.  That doesn't, however,
seem very practical.  It just doesn't make sense to define a datatype
where the set of legal values changes over time.  The right place to
put such constraints is in the application logic, where it doesn't take a
database upgrade to fix the problem.

 It's not as if ISN is poorly coded. This is a spec issue, which must
 have been debated when we first included it.  No?

I think our standards for inclusion have changed over the years - a
necessary part of project growth, or we would have 500 contrib modules
instead of 50.  The original isbn_issn module was contributed in 1998;
it was replaced by contrib/isn in 2006 because the original module
became obsolete.  I think it's fair to say that if this code were
submitted today it would never get accepted into our tree, and the
submitter would be advised not so much to publish on PGXN instead as
to throw it away entirely and rethink their entire approach to the
problem.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] FlexLocks

2011-11-15 Thread Alvaro Herrera

Excerpts from Robert Haas's message of mar nov 15 17:16:31 -0300 2011:
 On Tue, Nov 15, 2011 at 1:40 PM, Simon Riggs si...@2ndquadrant.com wrote:
  Which is the same locking avoidance technique we already use for sync
  rep and for the new group commit patch.
 
 Yep...
 
  I've been saying for some time that we should use the same technique
  for ProcArray and clog also, so we only need to queue once rather than
  queue three times at end of each transaction.
 
  I'm not really enthused by the idea of completely rewriting lwlocks
  for this. Seems like specialised code is likely to be best, as well as
  having less collateral damage.
 
 Well, the problem that I have with that is that we're going to end up
 with a lot of specialized code, particularly around error recovery.
 This code doesn't remove the need for ProcArrayLock to be taken in
 exclusive mode, and I don't think there IS any easy way to remove the
 need for that to happen sometimes.  So we have to deal with the
 possibility that an ERROR might occur while we hold the lock, which
 means we have to release the lock and clean up our state.  That means
 every place that has a call to LWLockReleaseAll() will now also need
 to cleanup ProperlyCleanUpProcArrayLockStuff().  And the next time we
 need to add some kind of specialized lock, we'll need to do the same
 thing again.  It seems to me that that rapidly gets unmanageable, not
 to mention *slow*.  We need some kind of common infrastructure for
 releasing locks, and this is an attempt to create such a thing.  I'm
 not opposed to doing it some other way, but I think doing each one as
 a one-off isn't going to work out very well.

I agree.  In fact, I would think that we should look into rewriting the
sync rep locking (and group commit) on top of flexlocks, not the other
way around.  As Kevin says nearby it's likely that we could find some
way to rewrite the SLRU (clog and such) locking protocol using these new
things too.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch for type privileges

2011-11-15 Thread Thom Brown
On 15 November 2011 20:23, Peter Eisentraut pete...@gmx.net wrote:
 Here is the patch to implement type privileges that I alluded to
 earlier.  To recall, this is mainly so that owners can prevent others
 from using their types because that would in some cases prevent owners
 from changing the types.  That would effectively be a denial of service.

 These are the interfaces that this patch implements:

 - GRANT USAGE ON DOMAIN
 - GRANT USAGE ON TYPE
 - default privileges for types
 - analogous REVOKEs
 - display privileges in psql \dT+
 - privilege checks in various DDL commands (CREATE FUNCTION, CREATE
 TABLE, etc.)
 - various information schema views adjusted
 - has_type_privilege function family

 The basics here are mainly informed by the SQL standard.  One thing from
 there I did not implement is checking for permission of a type used in
 CAST (foo AS type).  This would be doable but relatively complicated,
 and in practice someone who is not supposed to be able to use the type
 wouldn't be able to create the cast or the underlying cast function
 anyway for lack of access to the type.

 As elsewhere in the system, the usage of TYPE and DOMAIN is partially
 overlapping and partially not.  You can use GRANT ON TYPE on a domain
 but not GRANT ON DOMAIN on a type (compare CREATE/DROP).  We only
 support one common set of default privileges for types and domains.  I
 feel that's enough, but it could be adjusted.

 Open items:

 - GRANT TO ALL TYPES -- haven't gotten to that yet, but could be added

 A reviewer should of course particularly check if there are any holes in
 the privilege protection that this patch purports to afford.

Want to try again but with the patch attached? ;)

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] FlexLocks

2011-11-15 Thread Kevin Grittner
Alvaro Herrera alvhe...@commandprompt.com wrote:
 
 As Kevin says nearby it's likely that we could find some way to
 rewrite the SLRU (clog and such) locking protocol using these new
 things too.
 
Yeah, I really meant all SLRU, not just clog.  And having seen what
Robert has done here, I'm kinda glad I haven't gotten around to
trying to reduce LW lock contention yet, even though we're getting
dangerously far into the release cycle -- I think it can be done
much better with the, er, flexibility offered by the FlexLock patch.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 Josh Berkus j...@agliodbs.com wrote:
 
 Nothing is not fixable.  not fixable without breaking
 backwards compatibility is entirely possible, though.  If fixing
 it led to two different versions of ISN, then that would be a
 reason to push it to PGXN instead of shipping it.
 
 Well, the way to fix it would be to publish a new version of
 PostgreSQL every time the international authority that assigns
 ISBN prefixes allocates a new one, and for everyone to then update
 their PostgreSQL installation every time we do that.  That
 doesn't, however, seem very practical.
 
Having just taken a closer look at contrib/isn, I'm inclined to
think the current implementation is pretty hopeless.  ISBN seems
common enough and standardized enough that it could perhaps be
included in contrib with the proviso that ranges would only be
validated through pointing to a copy of the XML provided by the
standards body -- it wouldn't be up to PostgreSQL to supply that.
 
The other types in contrib/isn are things I don't know enough about
to have an opinion, but it seems a little odd to shove them all
together.  It would seem more natural to me to have a distinct type
for each and, if needed, figure out how to get a clean union of the
types.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Nov 15, 2011 at 10:29 AM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of mar nov 15 12:16:54 -0300 2011:
 I guess this is a dumb question, but why don't we remove all the dead
 tuples?

 They were deleted but there are transactions with older snapshots.

 Oh.  I was thinking "dead" meant no longer visible to anyone.  But
 it sounds like what we call "unremovable" here is what we elsewhere call
 "recently dead".

Would have to look at the code to be sure, but I think that
nonremovable is meant to count both live tuples and
dead-but-still-visible-to-somebody tuples.

The question that I think needs to be asked is why it would be useful
to track this using the pgstats mechanisms.  By definition, the
difference between this and the live-tuple count is going to be
extremely unstable --- I don't say small, necessarily, but short-lived.
So it's debatable whether it's worth memorializing the count obtained
by the last VACUUM at all.  And doing it through pgstats is an expensive
thing.  We've already had push-back about the size of the stats table
on large (lots-o-tables) databases.  Adding another counter will impose
a performance overhead on everybody, whether they care about this number
or not.

What's more, to the extent that I can think of use-cases for knowing
this number, I think I would want a historical trace of it --- that is,
not only the last VACUUM's result but those of previous VACUUM cycles.
So pgstats seems like it's both expensive and useless for the purpose.

Right now the only good solution is trawling the postmaster log.
Possibly something like pgfouine could track the numbers in a more
useful fashion.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Collect frequency statistics for arrays

2011-11-15 Thread Nathan Boley
 Rebased with head.

FYI, I've added myself as the reviewer for the current commitfest.

Best,
Nathan Boley

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 The thing is, most of the extensions in /contrib have major flaws, or
 they would have been folded in to the core code by now.  CITEXT doesn't
 support multiple collations.  INTARRAY and LTREE have inconsistent
 operators and many bugs.  CUBE lacks documentation.  DBlink is an
 ongoing battle with security holes.  etc.

 Picking out one of those and saying this is crap because of reason X,
 but I'll ignore all the flaws in all these other extensions is
 inconsistent and not liable to produce results.  Now, if you wanted to
 argue that we should kick *all* of the portable extensions out of
 /contrib and onto PGXN, then you'd have a much stronger argument.

There's a larger issue here, which is that a lot of the stuff in contrib
is useful as (a) coding examples for people to look at, and/or (b) test
cases for core-server functionality.  If a module gets kicked out to
PGXN we lose pretty much all the advantages of (b), and some of the
value of (a) because stuff that is in the contrib tree at least gets
maintained when we make server API changes.

So the fact that a module has conceptual flaws that are preventing it
from getting moved into core doesn't really have much to do with whether
we should remove it, IMV.  The big question to be asking about ISN is
whether removing it would result in a loss of test coverage for the core
server; and I believe the answer is yes --- ISTR at least one bug that
we found specifically because ISN tickled it.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Royce Ausburn

On 16/11/2011, at 2:05 AM, Yeb Havinga wrote:

 On 2011-10-05 00:45, Royce Ausburn wrote:
 Attached is v2 of my patch that doesn't update CATALOG_VERSION_NO.  I've 
 also fixed the name of an argument to pgstat_report_vacuum which I don't 
 think was particularly good, and I've replaced the word tuple with row in 
 some docs I added for consistency.
 
 
 I reviewed your patch. I think it is in good shape, my two main remarks (name 
 of n_unremovable_tup and a remark about documentation at the end of this 
 review) are highly subjective and I wouldn't spend time on it unless other 
 people have the same opinion.
 
 Remarks:
 
 * rules regression test fails because pg_stat_all_tables is changed, 
 pg_stat_sys_tables and pg_stat_user_tables as well, but the new 
 expected/rules.out is not part of the patch.

Doh!  Thank you for spotting this.  Should we decide to continue this patch 
I'll look in to fixing this.

 
 * I'm not sure about the name n_unremovable_tup, since it doesn't convey it 
 is about dead tuples and judging from only the name it might as well include 
 the live tuples. It also doesn't hint that it is a transient condition, which 
 vacuum verbose does with the word 'yet'.
 What about n_locked_dead_tup? - this contains both information that it is 
 about dead tuples, and the lock suggests that once the lock is removed, the 
 dead tuple can be removed.
 

Looks like we have some decent candidates later in the thread.  Should this 
patch survive I'll look at updating it.

 * The number shows correctly in the pg_stat_relation. This is a testcase that 
 gives unremovable dead rows:
 
 I was puzzled for a while that n_unremovable_tup <= n_dead_tup doesn't 
 hold, after all, the unremovable tuples are dead as well. Neither the current 
 documentation nor the documentation added by the patch helps in explaining 
 the meaning of n_dead_tup and n_unremovable_tup, which may be clear to 
 seasoned vacuum hackers, but not to me. In the case of n_dead_tup it 
 would have been nice if the docs mentioned that dead tuples are tuples that 
 are deleted or previous versions of updated tuples, and that only analyze 
 updates n_dead_tup (since vacuum cleans them), in contrast with 
 n_unremovable_tup, which gets updated by vacuum. Giving an example of how 
 unremovable dead tuples can be caused would IMHO also help understanding.

Fair enough!  I'll review this as well.

Thanks again!

--Royce
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Royce Ausburn

On 16/11/2011, at 8:04 AM, Tom Lane wrote:

 Robert Haas robertmh...@gmail.com writes:
 On Tue, Nov 15, 2011 at 10:29 AM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of mar nov 15 12:16:54 -0300 2011:
 I guess this is a dumb question, but why don't we remove all the dead
 tuples?
 
 They were deleted but there are transactions with older snapshots.
 
 Oh.  I was thinking "dead" meant no longer visible to anyone.  But
 it sounds like what we call "unremovable" here is what we elsewhere call
 "recently dead".
 
 Would have to look at the code to be sure, but I think that
 nonremovable is meant to count both live tuples and
 dead-but-still-visible-to-somebody tuples.
 
 The question that I think needs to be asked is why it would be useful
 to track this using the pgstats mechanisms.  By definition, the
 difference between this and the live-tuple count is going to be
 extremely unstable --- I don't say small, necessarily, but short-lived.
 So it's debatable whether it's worth memorializing the count obtained
 by the last VACUUM at all.  And doing it through pgstats is an expensive
 thing.  We've already had push-back about the size of the stats table
 on large (lots-o-tables) databases.  Adding another counter will impose
 a performance overhead on everybody, whether they care about this number
 or not.
 
 What's more, to the extent that I can think of use-cases for knowing
 this number, I think I would want a historical trace of it --- that is,
 not only the last VACUUM's result but those of previous VACUUM cycles.
 So pgstats seems like it's both expensive and useless for the purpose.
 
 Right now the only good solution is trawling the postmaster log.
 Possibly something like pgfouine could track the numbers in a more
 useful fashion.


Thanks all for the input.

Tom: 

My first patch attempted to log the number of unremovable tuples in this log, 
but it was done inappropriately -- it was included as part of the 
log_autovacuum_min_duration's output.  You rightly objected to that patch :)

Personally I think some log output, done better, would have been more useful 
for me at the time.  At the time I was trying to diagnose an ineffective vacuum 
and postgres' logs weren't giving me any hints about what was wrong.  I turned 
to the mailing list and got immediate help, but I felt that ideally postgres 
would be logging something to tell me that some 1 day old transactions were 
preventing auto vacuum from doing its job.  Something, anything that I could 
google.  Other novices in my situation probably wouldn't know to look in the 
pg_stats* tables, so in retrospect my patch isn't really achieving my original 
goal.

Should we consider taking a logging approach instead?

--Royce



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] FlexLocks

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 3:47 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Alvaro Herrera alvhe...@commandprompt.com wrote:

 As Kevin says nearby it's likely that we could find some way to
 rewrite the SLRU (clog and such) locking protocol using these new
 things too.

 Yeah, I really meant all SLRU, not just clog.  And having seen what
 Robert has done here, I'm kinda glad I haven't gotten around to
 trying to reduce LW lock contention yet, even though we're getting
 dangerously far into the release cycle -- I think it can be done
 much better with the, er, flexibility offered by the FlexLock patch.

I've had a thought that the SLRU machinery could benefit from having
the concept of a pin, which it currently doesn't.  I'm not certain
whether that thought is correct.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore --no-post-data and --post-data-only

2011-11-15 Thread Andrew Dunstan
On Sat, November 12, 2011 8:56 pm, Andrew Dunstan wrote:


 On 08/26/2011 05:11 PM, Tom Lane wrote:
 Alvaro Herrera alvhe...@commandprompt.com writes:
  The --section=data --section=indexes proposal seems very reasonable to
  me -- more so than --sections='data indexes'.
 +1 ... not only easier to code and less squishily defined, but more like
 the existing precedent for other pg_dump switches, such as --table.




 Here is a patch for that for pg_dump. The sections provided for are
 pre-data, data and post-data, as discussed elsewhere. I still feel that
 anything finer grained should be handled via pg_restore's --use-list
 functionality. I'll provide a patch to do the same switch for pg_restore
 shortly.

 Adding to the commitfest.



Updated version with pg_restore included is attached.

cheers

andrew
*** a/doc/src/sgml/ref/pg_dump.sgml
--- b/doc/src/sgml/ref/pg_dump.sgml
***
*** 116,124  PostgreSQL documentation
 </para>
  
 <para>
! This option is only meaningful for the plain-text format.  For
! the archive formats, you can specify the option when you
! call <command>pg_restore</command>.
 </para>
</listitem>
   </varlistentry>
--- 116,122 
 </para>
  
 <para>
! 		This option is equivalent to specifying <option>--section=data</>.
 </para>
</listitem>
   </varlistentry>
***
*** 404,413  PostgreSQL documentation
--- 402,431 
 <para>
  Dump only the object definitions (schema), not data.
 </para>
+<para>
+ 		This option is equivalent to specifying 
+ 		<option>--section=pre-data --section=post-data</>.
+</para>
</listitem>
   </varlistentry>
  
   <varlistentry>
+ 	   <term><option>--section=<replaceable class="parameter">sectionname</replaceable></option></term>
+ 	   <listitem>
+ 		 <para>
+ 		   Only dump the named section. The name can be one of <option>pre-data</>, <option>data</> 
+and <option>post-data</>. 
+ 		   This option can be specified more than once. The default is to dump all sections.
+ 		 </para>
+  <para>
+ 		   Post-data items consist of definitions of indexes, triggers, rules 
+ 		   and constraints other than check constraints. 
+ 		   Pre-data items consist of all other data definition items.
+ 		 </para>
+ 	   </listitem>
+ 	 </varlistentry>
+ 
+  <varlistentry>
<term><option>-S <replaceable class="parameter">username</replaceable></option></term>
<term><option>--superuser=<replaceable class="parameter">username</replaceable></option></term>
<listitem>
*** a/doc/src/sgml/ref/pg_restore.sgml
--- b/doc/src/sgml/ref/pg_restore.sgml
***
*** 93,98 
--- 93,101 
 <para>
  Restore only the data, not the schema (data definitions).
 </para>
+<para>
+ 		This option is equivalent to specifying <option>--section=data</>.
+</para>
</listitem>
   </varlistentry>
  
***
*** 359,364 
--- 362,371 
  (Do not confuse this with the <option>--schema</> option, which
  uses the word <quote>schema</> in a different meaning.)
 </para>
+<para>
+ 		This option is equivalent to specifying 
+ 		<option>--section=pre-data --section=post-data</>.
+</para>
</listitem>
   </varlistentry>
  
***
*** 505,510 
--- 512,533 
   </varlistentry>
  
   <varlistentry>
+ 	   <term><option>--section=<replaceable class="parameter">sectionname</replaceable></option></term>
+ 	   <listitem>
+ 		 <para>
+ 		   Only restore the named section. The name can be one of <option>pre-data</>, <option>data</> 
+and <option>post-data</>. 
+ 		   This option can be specified more than once. The default is to restore all sections.
+ 		 </para>
+  <para>
+ 		   Post-data items consist of definitions of indexes, triggers, rules 
+ 		   and constraints other than check constraints. 
+ 		   Pre-data items consist of all other data definition items.
+ 		 </para>
+ 	   </listitem>
+ 	 </varlistentry>
+ 
 <varlistentry>
<term><option>--use-set-session-authorization</option></term>
<listitem>
 <para>
*** a/src/bin/pg_dump/pg_backup.h
--- b/src/bin/pg_dump/pg_backup.h
***
*** 69,74  typedef enum _teSection
--- 69,82 
  	SECTION_POST_DATA			/* stuff to be processed after data */
  } teSection;
  
+ typedef enum 
+ {
+ 	DUMP_PRE_DATA = 0x01,
+ 	DUMP_DATA = 0x02,
+ 	DUMP_POST_DATA = 0x04,
+ 	DUMP_UNSECTIONED = 0xff
+ } DumpSections;
+ 
  /*
   *	We may want to have some more user-readable data, but in the mean
   *	time this gives us some abstraction and type checking.
***
*** 111,116  typedef struct _restoreOptions
--- 119,125 
  	int			dropSchema;
  	char	   *filename;
  	int			schemaOnly;
+ 	int dumpSections;
  	int			verbose;
  	int			aclsSkip;
  	int			tocSummary;
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***
*** 665,670  NewRestoreOptions(void)
--- 665,671 
  	/* set any fields that 

Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Peter Geoghegan
On 15 November 2011 21:53, Tom Lane t...@sss.pgh.pa.us wrote:
 There's a larger issue here, which is that a lot of the stuff in contrib
 is useful as (a) coding examples for people to look at, and/or (b) test
 cases for core-server functionality.  If a module gets kicked out to
 PGXN we lose pretty much all the advantages of (b), and some of the
 value of (a) because stuff that is in the contrib tree at least gets
 maintained when we make server API changes.

The isn module is patently broken. It has the potential to damage the
project's reputation if someone chooses to make an example out of it.
I think that that's more important than any additional test coverage
it may bring. There's only a fairly marginal benefit at the expense of
a bad user experience for anyone who should use isn.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 6:44 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 On 15 November 2011 21:53, Tom Lane t...@sss.pgh.pa.us wrote:
 There's a larger issue here, which is that a lot of the stuff in contrib
 is useful as (a) coding examples for people to look at, and/or (b) test
 cases for core-server functionality.  If a module gets kicked out to
 PGXN we lose pretty much all the advantages of (b), and some of the
 value of (a) because stuff that is in the contrib tree at least gets
 maintained when we make server API changes.

 The isn module is patently broken. It has the potential to damage the
 project's reputation if someone chooses to make an example out of it.
 I think that that's more important than any additional test coverage
 it may bring. There's only a fairly marginal benefit at the expense of
 a bad user experience for anyone who should use isn.

I agree.  The argument that this code is useful as example code has
been offered before, but the justification is pretty thin when the
example code is an example of a horrible design that no one should
ever copy.  I don't see that the isn code is doing anything that is so
unique that one of our add-on data types wouldn't be a suitable
(probably far better) template, but if it is, let's add similar
functionality to some other module, or add a new module that does
whatever that interesting thing is, or shove some code in
src/test/examples.  We can't go on complaining on the one hand that
people don't install postgresql-contrib, and then on the other hand
insist on shipping really bad code in contrib.  It's asking a lot for
someone who isn't already heavily involved in the project to
distinguish the wheat from the chaff.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] psql \ir filename normalization

2011-11-15 Thread Josh Kupershmidt
Hi all,

Commit c7f23494c1103f87bcf1ef7cbfcd626e73edb337 editorialized a bit on
Gurjeet Singh's patch to implement \ir for psql, particularly in
process_file(). Unfortunately, it looks like it broke the common case
of loading a .SQL file in psql's working directory. Consider the
following test case:

mkdir -p /tmp/psql_test/subdir/
mkdir -p /tmp/psql_test/path2/

echo "SELECT 'hello 1';" > /tmp/psql_test/hello.sql
echo "SELECT 'hello from parent';" > /tmp/psql_test/hello_parent.sql
echo "SELECT 'hello from absolute path';" \
  > /tmp/psql_test/path2/absolute_path.sql
echo -e "SELECT 'hello 2';\n\ir ../hello_parent.sql\n\ir /tmp/psql_test/path2/absolute_path.sql" \
  > /tmp/psql_test/subdir/hello2.sql
echo -e "\ir hello.sql\n\ir subdir/hello2.sql" > /tmp/psql_test/load.sql


If you try to load in load.sql from any working directory other than
/tmp/psql_test/ , you should correctly see four output statements.
However, if you:
  cd /tmp/psql_test/ && psql test -f load.sql

You will get:

psql:load.sql:1: /hello.sql: No such file or directory
psql:load.sql:2: /subdir/hello2.sql: No such file or directory

Attached is a patch which fixes this, by recycling the bit of
Gurjeet's code which used last_slash. (I have a feeling there's a
simpler way to fix it which avoids the last_slash complications.)

Josh


psql_fix_ir.v2.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing

2011-11-15 Thread Ants Aasma
On Tue, Nov 15, 2011 at 9:02 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 From my reading, it appears that if we get safe code in terms of
 strict aliasing, we might be able to use the restrict keyword to
 get further optimizations which bring it to a net win, but I think
 there is currently lower-hanging fruit than monkeying with these
 compiler options.  I'm letting this go, although I still favor the
 const-ifying which started this discussion, on the grounds of API
 clarity.

Speaking of lower-hanging fruit...

I ran a series of tests to see how different optimization flags
affect performance. I was particularly interested in what effect
link time optimization has. The results are somewhat interesting.

Benchmark machine is my laptop, Intel Core i5 M 540 @ 2.53GHz.
2 cores + hyperthreading for a total of 4 threads. Ubuntu 11.10.
Compiled with GCC 4.6.1-9ubuntu3.

I ran pgbench read only test with scale factor 10, default
options except for shared_buffers = 256MB. The dataset fits fully
in shared buffers.

I tried following configurations:
default: plain old ./configure; make; make install
-O3: what it says on the label
lto: CFLAGS="-O3 -flto". This should do some global optimizations
 at link time.
PGO: compiled with CFLAGS="-O3 -fprofile-generate", then ran
 pgbench -T 30 on a scalefactor 100 database (IO bound rw load
 to mix the profile up a bit). Then did
  # sed -i s/-fprofile-generate/-fprofile-use/ src/Makefile.global
 and recompiled and installed.
lto + PGO: same as previous, but with added -flto.

Median tps of 3 5 minute runs at different concurrency levels:

-c  default   -O3  lto   PGO   lto + PGO
==
 1  6753.40  6689.76  6498.37  6614.73  5918.65
 2 11600.87 11659.33 12074.63 12957.81 13353.54
 4 18852.86 18918.32 19008.89 20006.49 20652.93
 8 15232.30 15762.70 14568.06 15880.19 16091.24
16 15693.93 15625.87 16563.91 17088.28 18223.02

Percentage increase from default flags:

-c  default   -O3  lto   PGO   lto + PGO
==
 1  6753.40  -0.94%   -3.78%   -2.05%  -12.36%
 2 11600.87   0.50%4.08%   11.70%   15.11%
 4 18852.86   0.35%0.83%6.12%9.55%
 8 15232.30   3.48%   -4.36%4.25%5.64%
16 15693.93  -0.43%5.54%8.88%   16.12%

Concurrency 8 results should probably be ignored - variance was huge,
definitely more than the differences. For other results, variance was
~1%.

I don't know what to make of the single client results, why they seem
to be going in the opposite direction of all other results. Other than
that both profile guided optimization and link time optimization give
a pretty respectable boost. If anyone can suggest some more diverse
workloads to test, I could try to see if the PGO results persist when
profiling and benchmark loads differ more. These results suggest that
giving the compiler information about hot and cold paths results in a
significant improvement in generated code quality.

--
Ants Aasma

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Group Commit

2011-11-15 Thread Jeff Janes
On Mon, Nov 14, 2011 at 2:08 AM, Simon Riggs si...@2ndquadrant.com wrote:
 Enclosed patch implements Group Commit and also powersave mode for WALWriter.

 XLogFlush() waits for WALWriter to run XLogBackgroundFlush(), which
 flushes WAL and then wakes waiters. Uses same concepts and similar
 code to sync rep.

 Purpose is to provide consistent WAL writes, even when WALInsertLock
 contended. Currently no off option, thinking is that the overhead of
 doing this is relatively low and so it can be always on - exactly as
 it is for sync rep.

 WALWriter now has variable wakeups, so wal_writer_delay is removed.
 Commit_delay and Commit_siblings are now superfluous and are also removed.

 Works, but needs discussion in some areas, docs and possibly tuning
 first, so this is more of a quicky than a slow, comfortable patch.

When I apply this to HEAD and run make check, it freezes at:
test tablespace   ...

The wal writer process is running at 100% CPU.  The only thing it is
doing that is visible through strace is checking that the postmaster is
alive.

I see the same behavior on two different Linux boxes, 32 bit and 64
bit.  I've tried to do some debugging, but haven't made much headway.
Does anyone else see this?

Cheers,

Jeff

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Peter Geoghegan
On 15 November 2011 23:57, Robert Haas robertmh...@gmail.com wrote:
 We can't go on complaining on the one hand that
 people don't install postgresql-contrib, and then on the other hand
 insist on shipping really bad code in contrib.  It's asking a lot for
 someone who isn't already heavily involved in the project to
 distinguish the wheat from the chaff.

ISTM that any sensible sanitisation of data guards against Murphy, not
Machiavelli. Exactly what sort of error is it imagined will be
prevented by enforcing that ISBN prefixes are valid? Transcription
and transposition errors are already guarded against very effectively
by the check digit.

If we can't put isn out of its misery, a sensible compromise would be
to rip out the prefix enforcement feature so that, for example, ISBN13
behaved exactly the same as EAN13. That would at least result in
predictable behaviour. The strike that separates each part of the
ISBN would be lost, but it is actually not visible on large ISBN
websites, presumably because it's a tar-pit of a problem.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Royce Ausburn
 
 
 Personally I think some log output, done better, would have been more useful 
 for me at the time.  At the time I was trying to diagnose an ineffective 
 vacuum and postgres' logs weren't giving me any hints about what was wrong.  
 I turned to the mailing list and got immediate help, but I felt that ideally 
 postgres would be logging something to tell me that some 1 day old 
 transactions were preventing auto vacuum from doing its job.  Something, 
 anything that I could google.  Other novices in my situation probably 
 wouldn't know to look in the pg_stats* tables, so in retrospect my patch 
 isn't really achieving my original goal.
 
 Should we consider taking a logging approach instead?

Dopey suggestion: 

Instead of logging around vacuum and auto_vacuum, perhaps log transactions that 
are open for longer than some (perhaps configurable) time?  The default might 
be pretty large, say 6 hours.  Are there common use cases for txs that run for 
longer than 6 hours?  Seeing a message such as:

WARNING: Transaction X has been open more than Y.  This tx may be holding 
locks preventing other txs from operating and may prevent vacuum from cleaning 
up deleted rows.

Would give a pretty clear indication of a problem :)



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Joshua Berkus
All,

 I agree.  The argument that this code is useful as example code has
 been offered before, but the justification is pretty thin when the
 example code is an example of a horrible design that no one should
 ever copy.

People are already using ISN (or at least ISBN) in production.  It's been 
around for 12 years.  So any step we take with contrib/ISN needs to take that 
into account -- just as we have with Tsearch2 and XML2.

One can certainly argue that some of the stuff in /contrib would be better on 
PGXN.  But in that case, it's not limited to ISN; there are several modules of 
insufficient quality (including intarray and ltree) or legacy nature which 
ought to be pushed out.  Probably most of them.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Group Commit

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 7:18 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 When I apply this to HEAD and run make check, it freezes at:
 test tablespace               ...
[...]
  Does anyone else see this?

It hangs for me, too, just slightly later:

== running regression test queries==
test tablespace   ... ok
parallel group (18 tests):  name txid oid float4 text int2 int4 int8
char varchar uuid float8 money boolean bit enum numeric

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore --no-post-data and --post-data-only

2011-11-15 Thread Joshua Berkus


  Here is a patch for that for pg_dump. The sections provided for are
  pre-data, data and post-data, as discussed elsewhere. I still feel that
  anything finer grained should be handled via pg_restore's --use-list
  functionality. I'll provide a patch to do the same switch for pg_restore
  shortly.
 
  Adding to the commitfest.
 
 
 
 Updated version with pg_restore included is attached.

Functionality review:

I have tested the backported version of this patch using a 500GB production 
database with over 200 objects and it worked as specified. 

This functionality is extremely useful for a variety of selective copies 
of databases, including creating shrunken test instances, ad-hoc parallel dumps, 
differently indexed copies, and sanitized copies of sensitive data, and even 
bringing the database up for usage while the indexes are still building.

Note that this feature has the odd effect that some constraints are loaded at 
the same time as the tables and some are loaded with the post-data.  This is 
consistent with how text-mode pg_dump has always worked, but will seem odd to 
the user.  This also raises the possibility of a future pg_dump/pg_restore 
optimization.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 7:59 PM, Royce Ausburn royce...@inomial.com wrote:
 Personally I think some log output, done better, would have been more useful 
 for me at the time.  At the time I was trying to diagnose an ineffective 
 vacuum and postgres' logs weren't giving me any hints about what was wrong.  
 I turned to the mailing list and got immediate help, but I felt that ideally 
 postgres would be logging something to tell me that some 1 day old 
 transactions were preventing auto vacuum from doing its job.  Something, 
 anything that I could google.  Other novices in my situation probably 
 wouldn't know to look in the pg_stats* tables, so in retrospect my patch 
 isn't really achieving my original goal.

 Should we consider taking a logging approach instead?

 Dopey suggestion:

 Instead of logging around vacuum and auto_vacuum, perhaps log transactions 
 that are open for longer than some (perhaps configurable) time?  The default 
 might be pretty large, say 6 hours.  Are there common use cases for txs that 
 run for longer than 6 hours?  Seeing a message such as:

 WARNING: Transaction X has been open more than Y.  This tx may be holding 
 locks preventing other txs from operating and may prevent vacuum from 
 cleaning up deleted rows.

 Would give a pretty clear indication of a problem :)

Well, you could do that much just by periodically querying pg_stat_activity.
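
Something along these lines would do it (column names as of 9.1; the
six-hour cutoff is just an example):

SELECT procpid, usename, now() - xact_start AS xact_age, current_query
  FROM pg_stat_activity
 WHERE xact_start < now() - interval '6 hours'
 ORDER BY xact_start;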

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Peter Geoghegan
On 16 November 2011 01:09, Joshua Berkus j...@agliodbs.com wrote:
 People are already using ISN (or at least ISBN) in production.  It's been 
 around for 12 years.

contrib/isn has been around since 2006. The argument "some unknowable
number of people are using this feature in production" could equally
well apply to anything that we might consider deprecating.

I am not arguing for putting isn on PGXN. I'm arguing for actively
warning people against using it, because it is harmful. Any serious
use of the ISBN datatypes can be expected to break unpredictably one
day, and the only thing that someone can do in that situation is to
write their own patch to contrib/isn. They'd then have to wait for
that patch to be accepted if they didn't want to fork, which is a very
bad situation indeed. This already happened once.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Unremovable tuple monitoring

2011-11-15 Thread Royce Ausburn

On 16/11/2011, at 12:26 PM, Robert Haas wrote:

 On Tue, Nov 15, 2011 at 7:59 PM, Royce Ausburn royce...@inomial.com wrote:
 Personally I think some log output, done better, would have been more 
 useful for me at the time.  At the time I was trying to diagnose an 
 ineffective vacuum and postgres' logs weren't giving me any hints about 
 what was wrong.  I turned to the mailing list and got immediate help, but I 
 felt that ideally postgres would be logging something to tell me that some 
 1 day old transactions were preventing auto vacuum from doing its job.  
 Something, anything that I could google.  Other novices in my situation 
 probably wouldn't know to look in the pg_stats* tables, so in retrospect my 
 patch isn't really achieving my original goal.
 
 Should we consider taking a logging approach instead?
 
 Dopey suggestion:
 
 Instead of logging around vacuum and auto_vacuum, perhaps log transactions 
 that are open for longer than some (perhaps configurable) time?  The default 
 might be pretty large, say 6 hours.  Are there common use cases for txs that 
 run for longer than 6 hours?  Seeing a message such as:
 
 WARNING: Transaction X has been open more than Y.  This tx may be holding 
 locks preventing other txs from operating and may prevent vacuum from 
 cleaning up deleted rows.
 
 Would give a pretty clear indication of a problem :)
 
 Well, you could do that much just by periodically querying pg_stat_activity.

Fair enough -- someone knowledgeable could set that up if they wanted.  My goal 
was mostly to have something helpful in the logs.  If that's not something 
postgres wants/needs I'll drop it =)





-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-15 Thread Greg Smith
Well, this discussion veering off into ISN has certainly validated my 
gut feel that I should touch only the minimum number of things that 
kills my pain points, rather than trying any more ambitious 
restructuring.  I hope that packaged extensions become so popular that 
some serious cutting can happen to contrib, especially the data type 
additions.  If something as big as PostGIS can live happily as an 
external project, surely most of these can too.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo: pg_backup - using a conditional DROP

2011-11-15 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Nov 15, 2011 at 10:36 AM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 I'm wondering why we need an option for this, though.  Assuming we
 make DROP IF EXISTS work anywhere that it doesn't already, why not
 just always produce that rather than straight DROP?  It seems
 categorically better.

 I think there's a fuzzy idea that we should try to keep our dumps
 vaguely compatible with other systems.  If we add DROP IF EXISTS
 unconditionally, there would be no way to make them run elsewhere.

 Well, except in --clean mode, we don't emit DROP commands at all.  And
 since --clean doesn't even work well on PostgreSQL, I can't get too
 excited about whether it will work everywhere else.

What I find lacking here is a clear explication of what the use-case is;
that is, this proposal seems like a solution in search of a problem.

The default case for pg_dump is that you're going to load a bunch of
objects into an empty database.  You don't need any DROP commands,
and this always works fine (or if it doesn't, there's a clear bug to
be fixed in pg_dump).

The --clean switch seems to be targeted at the case that you're trying
to replace the contents of a database that has the same schema as the
one you dumped from.  The DROPs will work, more or less, barring nasty
cases such as circular dependencies.  (Maybe it will work even then,
but I don't know how carefully we've tested such cases.)

Now, --clean using DROP IF EXISTS would be targeted at, um, what case?
I guess the idea is to be able to load into a database that sort of
mostly shares the same schema as the one you dumped from, only it's not
the same (if it were the same, you'd not need IF EXISTS).  The problem
with this is that if the schema isn't the same, it probably hasn't got
the same dependencies, so there's rather little likelihood that pg_dump
will correctly guess what order to issue the DROPs in ... and it
certainly won't know about dependencies to the target objects from other
objects that are in the destination database but weren't in the source.

Possibly you could get around that by ignoring errors; but if you're
willing to ignore errors, you don't need the IF EXISTS qualifiers.
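
(For concreteness, the only thing IF EXISTS changes is how a missing
object is reported:

    DROP TABLE foo;            -- ERROR:  table "foo" does not exist
    DROP TABLE IF EXISTS foo;  -- NOTICE:  table "foo" does not exist, skipping

so it buys nothing that ignoring errors doesn't already buy.)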

So before buying into this proposal, I want to see a clear demonstration
of a common use-case where it actually does some good.  I'm not
convinced that there is one.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISN was: Core Extensions relocation

2011-11-15 Thread Tom Lane
Peter Geoghegan pe...@2ndquadrant.com writes:
 If we can't put isn out of its misery, a sensible compromise would be
 to rip out the prefix enforcement feature so that, for example, ISBN13
 behaved exactly the same as EAN13.

That might be a reasonable compromise.  Certainly the check-digit
calculation is much more useful for validity checking than the prefix
checking.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql \ir filename normalization

2011-11-15 Thread Robert Haas
On Tue, Nov 15, 2011 at 6:54 PM, Josh Kupershmidt schmi...@gmail.com wrote:
 Commit c7f23494c1103f87bcf1ef7cbfcd626e73edb337 editorialized a bit on
 Gurjeet Singh's patch to implement \ir for psql, particularly in
 process_file(). Unfortunately, it looks like it broke the common case
 of loading a .SQL file in psql's working directory. Consider the
 following test case:

 mkdir -p /tmp/psql_test/subdir/
 mkdir -p /tmp/psql_test/path2/

 echo "SELECT 'hello 1';" > /tmp/psql_test/hello.sql
 echo "SELECT 'hello from parent';" > /tmp/psql_test/hello_parent.sql
 echo "SELECT 'hello from absolute path';" > /tmp/psql_test/path2/absolute_path.sql
 echo -e "SELECT 'hello 2';\n\ir ../hello_parent.sql\n\ir /tmp/psql_test/path2/absolute_path.sql" > /tmp/psql_test/subdir/hello2.sql
 echo -e "\ir hello.sql\n\ir subdir/hello2.sql" > /tmp/psql_test/load.sql


 If you try to load in load.sql from any working directory other than
 /tmp/psql_test/ , you should correctly see four output statements.
 However, if you:
 cd /tmp/psql_test/ && psql test -f load.sql

 You will get:

 psql:load.sql:1: /hello.sql: No such file or directory
 psql:load.sql:2: /subdir/hello2.sql: No such file or directory

 Attached is a patch which fixes this, by recycling the bit of
 Gurjeet's code which used last_slash. (I have a feeling there's a
 simpler way to fix it which avoids the last_slash complications.)

Argh.  The root of the problem here seems to be that
join_path_components() feels entitled to arbitrarily insert a pathname
separator at the front of the output string even if its first input
didn't begin with one originally.  Lame!
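
For comparison, here is roughly the behavior one would expect from such
a helper, as a hypothetical sketch (not the actual src/port code):

#include <stdio.h>
#include <string.h>

/*
 * Join head and tail; the result is absolute only when head is, and
 * no separator is ever prepended to a relative head.
 */
static void
join_path_sketch(char *dst, size_t dstlen,
				 const char *head, const char *tail)
{
	if (head[0] == '\0')
		snprintf(dst, dstlen, "%s", tail);
	else if (head[strlen(head) - 1] == '/')
		snprintf(dst, dstlen, "%s%s", head, tail);
	else
		snprintf(dst, dstlen, "%s/%s", head, tail);
}

With head = "subdir" and tail = "hello2.sql" that yields
subdir/hello2.sql rather than /subdir/hello2.sql, which is exactly the
failure shown above.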

While looking for other places where this behavior might cause a
problem, I noticed something else that doesn't seem right.  On
REL9_1_STABLE, if I initdb and then change the first "all" on the
"local" line to "@foo", I get this:

LOG:  could not open secondary authentication file "@foo" as
"/Users/rhaas/pgsql/x/foo": No such file or directory

...and then the server starts up.  But on master, I get:

LOG:  could not open secondary authentication file "@foo" as
"/Users/rhaas/pgsql/y/foo": No such file or directory
LOG:  end-of-line before database specification
CONTEXT:  line 84 of configuration file /Users/rhaas/pgsql/y/pg_hba.conf
LOG:  invalid connection type "all"
CONTEXT:  line 85 of configuration file /Users/rhaas/pgsql/y/pg_hba.conf
FATAL:  could not load pg_hba.conf

...which doesn't look right.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Configuration include directory

2011-11-15 Thread Greg Smith
Two years ago Magnus submitted a patch to parse all the configuration 
files in a directory.  After some discussion I tried to summarize what I 
thought the most popular ideas were for moving that forward:


http://archives.postgresql.org/pgsql-hackers/2009-10/msg01452.php
http://archives.postgresql.org/pgsql-hackers/2009-10/msg01631.php

And I've now cleared the bit rot and updated that patch to do what was 
discussed.  Main feature set:


-Called by specifying includedir "directory".  No changes to the 
shipped postgresql.conf yet.

-Takes an input directory name
-If it's not an absolute path, considers that relative to the -D option 
(if specified) or PGDATA, the same logic used to locate the 
postgresql.conf (unless a full path to it is used)
-Considers all names in that directory that end with *.conf  [Discussion 
concluded more flexibility here would be of limited value relative to 
how it complicates the implementation]

-Loops over the files found in sorted order by name
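
That last step is simple enough to sketch standalone (illustrative C
only; the real version is the patch's ParseConfigDirectory): collect
the matching names, sort them, then parse each file in turn.

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int
comparestr(const void *a, const void *b)
{
	return strcmp(*(const char **) a, *(const char **) b);
}

static void
scan_conf_dir(const char *dirname)
{
	DIR		   *dir = opendir(dirname);
	struct dirent *de;
	char	   *names[256];
	int			n = 0;
	int			i;

	if (dir == NULL)
		return;
	while ((de = readdir(dir)) != NULL && n < 256)
	{
		size_t		len = strlen(de->d_name);

		/* keep only names ending in ".conf" */
		if (len > 5 && strcmp(de->d_name + len - 5, ".conf") == 0)
			names[n++] = strdup(de->d_name);
	}
	closedir(dir);
	qsort(names, n, sizeof(char *), comparestr);
	for (i = 0; i < n; i++)
	{
		printf("would parse %s/%s\n", dirname, names[i]);
		free(names[i]);
	}
}

Note that an editor backup file such as 00config.conf~ fails the
suffix test, which is the rejection shown in the demo below.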

The idea here is that it will be easier to write tools that customize 
the database configuration if they can just write a new file out, rather 
than needing to parse the whole configuration file first.  This would 
allow Apache style configuration directories.  My end goal here is to 
see all of the work initdb does pushed into a new file included by this 
scheme.  People could then expect a functionally empty postgresql.conf 
except for an includedir, and the customization would go into 00initdb.  
Getting some agreement on that is not necessary for this feature to go 
in though; one step at a time.


Here's an example showing this working, including rejection of a 
spurious editor backup file in the directory:


$ cat $PGDATA/postgresql.conf | grep ^work_mem
$ tail -n 1 $PGDATA/postgresql.conf
includedir='conf.d'
$ ls $PGDATA/conf.d
00config.conf  00config.conf~
$ cat $PGDATA/conf.d/00config.conf
work_mem=4MB
$ cat $PGDATA/conf.d/00config.conf~
work_mem=2MB
$ psql -c "select name,setting,sourcefile,sourceline from pg_settings 
where name='work_mem'"
   name   | setting |                      sourcefile                       | sourceline
----------+---------+-------------------------------------------------------+------------
 work_mem | 4096    | /home/gsmith/pgwork/data/confdir/conf.d/00config.conf |          1


No docs in here yet.  There's one ugly bit of code here I was hoping 
(but failed) to avoid.  Right now the server doesn't actually save the 
configuration directory anywhere.  Once you leave the initial read in 
SelectConfigFiles, that information is gone, and you only have the 
configfile.  I decided to make that configdir into a global value.  
Seemed easier than trying to pass it around, given how many SIGHUP paths 
could lead to this new code.


I can see some potential confusion here in one case.  Let's say someone 
specifies a full path to their postgresql.conf file.  They might assume 
that the includedir was relative to the directory that file is in.  
Let's say configfile is /etc/sysconfig/pgsql/postgresql.conf ; a user 
might think that includedir "conf.d" from there would reference 
/etc/sysconfig/pgsql/conf.d instead of the $PGDATA/conf.d you actually 
get.  Wavering on how to handle that is one reason I didn't try 
documenting this yet, the decision I made here may not actually be the 
right one.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us

diff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l
index a094c7a..25e1a07 100644
*** a/src/backend/utils/misc/guc-file.l
--- b/src/backend/utils/misc/guc-file.l
*** ParseConfigFp(FILE *fp, const char *conf
*** 512,518 
  		}
  
  		/* OK, process the option name and value */
! 		if (guc_name_compare(opt_name, "include") == 0)
  		{
  			/*
  			 * An include directive isn't a variable and should be processed
--- 512,535 
  		}
  
  		/* OK, process the option name and value */
! 		if (guc_name_compare(opt_name, "includedir") == 0)
! 		{
! 			/*
! 			 * An includedir directive isn't a variable and should be processed
! 			 * immediately.
! 			 */
! 			unsigned int save_ConfigFileLineno = ConfigFileLineno;
! 
! 			if (!ParseConfigDirectory(opt_value, config_file,
!  depth + 1, elevel,
!  head_p, tail_p))
! OK = false;
! 			yy_switch_to_buffer(lex_buffer);
! 			ConfigFileLineno = save_ConfigFileLineno;
! 			pfree(opt_name);
! 			pfree(opt_value);
! 		}
! 		else if (guc_name_compare(opt_name, "include") == 0)
  		{
  			/*
  			 * An include directive isn't a variable and should be processed
*** ParseConfigFp(FILE *fp, const char *conf
*** 599,604 
--- 616,727 
  	return OK;
  }
  
+ static int
+ comparestr(const void *a, const void *b)
+ {
+ 	return strcmp(*(char **) a, *(char **) b);
+ }
+ 
+ /*
+  * Read and parse all config files in a subdirectory in alphabetical order
+  */
+ bool
+ 

[HACKERS] includeifexists in configuration file

2011-11-15 Thread Greg Smith
By recent popular request in the ongoing discussion saga around merging 
the recovery.conf, I've added an includeifexists directive to the 
postgresql.conf in the attached patch.  Demo:


$ tail -n 1 $PGDATA/postgresql.conf
include 'missing.conf'
$ pg_ctl start -l $PGLOG
server starting
$ tail -n 2 $PGLOG
LOG:  could not open configuration file 
"/home/gsmith/pgwork/data/include-exists/missing.conf": No such file or 
directory
FATAL:  configuration file 
"/home/gsmith/pgwork/data/include-exists/postgresql.conf" contains errors

$ vi $PGDATA/postgresql.conf
$ tail -n 1 $PGDATA/postgresql.conf
includeifexists 'missing.conf'
$ pg_ctl start -l $PGLOG
server starting
$ tail -n 3 $PGLOG
LOG:  database system was shut down at 2011-11-16 00:17:36 EST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

There might be a cleaner way to write this that eliminates some of the 
cut and paste duplication between this and the regular include 
directive.  I'm short on clever but full of brute force tonight.
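
One hedged possibility, reusing the names from the patch above rather
than inventing new ones: funnel both directives through a single helper
and vary only the strict flag (a sketch of the idea, not what the patch
actually does):

static bool
process_include(const char *opt_value, const char *config_file,
				bool strict, int depth, int elevel,
				ConfigVariable **head_p, ConfigVariable **tail_p)
{
	unsigned int save_ConfigFileLineno = ConfigFileLineno;
	bool		OK = true;

	if (!ParseConfigFile(opt_value, config_file, strict,
						 depth + 1, elevel, head_p, tail_p))
		OK = false;
	yy_switch_to_buffer(lex_buffer);
	ConfigFileLineno = save_ConfigFileLineno;
	return OK;
}

Each directive's branch in ParseConfigFp would then shrink to a single
call with strict set appropriately.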


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d1e628f..da45ac1 100644
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
*** include 'filename'
*** 91,96 
--- 91,107 
  
 <para>
  <indexterm>
+  <primary><literal>includeifexists</></primary>
+  <secondary>in configuration file</secondary>
+ </indexterm>
+ Use the same approach as the <literal>include</> directive, continuing
+ normally if the file does not exist.  A regular <literal>include</>
+ will stop with an error if the referenced file is missing, while
+ <literal>includeifexists</> does not.
+</para>
+ 
+<para>
+ <indexterm>
   <primary>SIGHUP</primary>
  </indexterm>
  The configuration file is reread whenever the main server process receives a
diff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l
index a094c7a..6f26421 100644
*** a/src/backend/utils/misc/guc-file.l
--- b/src/backend/utils/misc/guc-file.l
*** ProcessConfigFile(GucContext context)
*** 129,135 
  	/* Parse the file into a list of option names and values */
  	head = tail = NULL;
  
! 	if (!ParseConfigFile(ConfigFileName, NULL, 0, elevel, &head, &tail))
  	{
  		/* Syntax error(s) detected in the file, so bail out */
  		error = true;
--- 129,135 
  	/* Parse the file into a list of option names and values */
  	head = tail = NULL;
  
! 	if (!ParseConfigFile(ConfigFileName, NULL, true, 0, elevel, &head, &tail))
  	{
  		/* Syntax error(s) detected in the file, so bail out */
  		error = true;
*** ProcessConfigFile(GucContext context)
*** 363,369 
   * and absolute-ifying the path name if necessary.
   */
  bool
! ParseConfigFile(const char *config_file, const char *calling_file,
  int depth, int elevel,
  ConfigVariable **head_p,
  ConfigVariable **tail_p)
--- 363,369 
   * and absolute-ifying the path name if necessary.
   */
  bool
! ParseConfigFile(const char *config_file, const char *calling_file, bool strict,
  int depth, int elevel,
  ConfigVariable **head_p,
  ConfigVariable **tail_p)
*** ParseConfigFile(const char *config_file,
*** 414,424 
  	fp = AllocateFile(config_file, "r");
  	if (!fp)
  	{
! 		ereport(elevel,
! (errcode_for_file_access(),
!  errmsg("could not open configuration file \"%s\": %m",
! 		config_file)));
! 		return false;
  	}
  
  	OK = ParseConfigFp(fp, config_file, depth, elevel, head_p, tail_p);
--- 414,430 
  	fp = AllocateFile(config_file, "r");
  	if (!fp)
  	{
! 		if (strict)
! 		{
! 			ereport(elevel,
! 	(errcode_for_file_access(),
! 	 errmsg("could not open configuration file \"%s\": %m",
! 			config_file)));
! 			return false;
! 		}
! 
! 		/* Silently skip missing files if not asked to be strict */
! 		return OK;
  	}
  
  	OK = ParseConfigFp(fp, config_file, depth, elevel, head_p, tail_p);
*** ParseConfigFp(FILE *fp, const char *conf
*** 512,518 
  		}
  
  		/* OK, process the option name and value */
! 		if (guc_name_compare(opt_name, "include") == 0)
  		{
  			/*
  			 * An include directive isn't a variable and should be processed
--- 518,541 
  		}
  
  		/* OK, process the option name and value */
! 		if (guc_name_compare(opt_name, "includeifexists") == 0)
! 		{
! 			/*
! 			 * An includeifexists directive isn't a variable and should be
! 			 * processed immediately.
! 			 */
! 			unsigned int save_ConfigFileLineno = ConfigFileLineno;
! 
! 			if (!ParseConfigFile(opt_value, config_file, false,
!  depth + 1, elevel,
!  head_p, tail_p))
! OK = false;
! 			yy_switch_to_buffer(lex_buffer);
! 			ConfigFileLineno = save_ConfigFileLineno;
! 			pfree(opt_name);
! 			pfree(opt_value);
! 		}
! 		else if (guc_name_compare(opt_name, "include") == 0)
  		{
  			/*
  			 

Re: [HACKERS] FlexLocks

2011-11-15 Thread Dan Ports
On Tue, Nov 15, 2011 at 10:55:49AM -0600, Kevin Grittner wrote:
 And I would be
 surprised if some creative thinking didn't yield a far better FL
 scheme for SSI than we can manage with existing LW locks.

One place I could see it being useful is for
SerializableFinishedListLock, which protects the queue of committed
sxacts that can't yet be cleaned up. When committing a transaction, it
gets added to the list, and then scans the queue to find and clean up
any sxacts that are no longer needed. If there's contention, we don't
need multiple backends doing that scan; it's enough for one to complete
it on everybody's behalf.
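
Sketched crudely (hypothetical names, and glossing over how a FlexLock
would actually queue and wake the other backends):

#include <stdbool.h>

static volatile int cleanup_in_progress = 0;

static void
maybe_clean_finished_list(void)
{
	/* only one backend gets to scan; the rest skip the work */
	if (!__sync_bool_compare_and_swap(&cleanup_in_progress, 0, 1))
		return;

	/* ... scan the finished-sxact queue and release what we can ... */

	__sync_lock_release(&cleanup_in_progress);
}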

I haven't thought it through, but it may also help with the other
contention bottleneck on that lock: that every transaction needs to add
itself to the cleanup list when it commits.


Mostly unrelatedly, the other thing that's looking like it would be really
useful would be some support for atomic integer operations. This would
be useful for some SSI things like writableSxactCount, and some things
elsewhere like the strong lock count in the regular lock manager.
I've been toying with the idea of creating an AtomicInteger type with a
few operations like increment-and-get, compare-and-set, swap, etc. This
would be implemented using the appropriate hardware operations on
platforms that support them (x86_64, perhaps others) and fall back on a
spinlock implementation on other platforms. I'll probably give it a try
and see what it looks like, but if anyone has any thoughts, let me know.
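
For concreteness, a first cut might look something like this with GCC's
__sync builtins (made-up names; the spinlock fallback for platforms
without the builtins is elided):

#include <stdbool.h>
#include <stdint.h>

typedef struct
{
	volatile int32_t value;
} AtomicInteger;

static inline int32_t
atomic_increment_and_get(AtomicInteger *a)
{
	return __sync_add_and_fetch(&a->value, 1);
}

static inline bool
atomic_compare_and_set(AtomicInteger *a, int32_t expected, int32_t newval)
{
	return __sync_bool_compare_and_swap(&a->value, expected, newval);
}

static inline int32_t
atomic_swap(AtomicInteger *a, int32_t newval)
{
	/* atomic exchange; note this builtin is only an acquire barrier */
	return __sync_lock_test_and_set(&a->value, newval);
}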

Dan

-- 
Dan R. K. Ports   MIT CSAIL   http://drkp.net/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch : Allow toast tables to be moved to a different tablespace

2011-11-15 Thread Jaime Casanova
On Tue, Nov 15, 2011 at 11:08 AM, Julien Tachoires jul...@gmail.com wrote:

 Maybe I'd missed something, but the normal case is:
 ALTER TABLE ... SET TABLESPACE = moves Table + moves associated TOAST Table
 ALTER TABLE ... SET TABLE TABLESPACE = moves Table & keeps associated
 TOAST Table at its place
 ALTER TABLE ... SET TOAST TABLESPACE = keeps Table at its place &
 moves associated TOAST Table


it has docs, and pg_dump support which is good.

but i found a few problems with the behaviour:
1) it accepts the syntax ALTER INDEX ... SET TOAST TABLESPACE ...;
which does nothing
2) after CLUSTER the index of the toast table gets moved to the same
tablespace as the main table
3) after ALTER TABLE ... ALTER ... TYPE ...; the toast table gets
moved to the same tablespace as the main table

now, if we are supporting these variants
ALTER TABLE SET TABLE TABLESPACE
ALTER TABLE SET TOAST TABLESPACE

why not also support ALTER TABLE SET INDEX TABLESPACE which should
have the same behaviour as ALTER INDEX SET TABLESPACE... just an idea,
and of course not necessary for this patch

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-11-15 Thread Robert Haas
On Mon, Nov 7, 2011 at 2:14 PM, Josh Berkus j...@agliodbs.com wrote:
 2. standby_mode becomes an ENUM: off,standby,pitr.  It can be reset on
 server reload or through pg_ctl promote

I'm a little bit confused by the way we're dragging standby_mode into
this conversation.  If you're using pg_standby, you can set
standby_mode=off and still have a standby.  If you're using a simple
recovery command that just copies files, you need to set
standby_mode=on if you don't want the standby to exit recovery and
promote.  But I think of standby_mode as meaning "should we use the
internal standby loop rather than depending on an external tool?"
rather than "should we become a standby?".
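
Concretely, the two styles being contrasted look something like this
(illustrative recovery.conf fragments; the paths are made up):

# internal standby loop: a dumb restore_command, standby_mode on
restore_command = 'cp /archive/%f %p'
standby_mode = 'on'

# external tool that does its own waiting: standby_mode can stay off
restore_command = 'pg_standby /archive %f %p %r'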

Maybe I'm confused.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] removing =(text, text) in 9.2

2011-11-15 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Thanks for the review.  New version attached.

This looks sane to me too (though I only read the patch and didn't
actually test upgrading).

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Displaying accumulated autovacuum cost

2011-11-15 Thread Greg Smith

On 10/05/2011 03:02 AM, Greg Smith wrote:
Presumably you meant to ask if this makes sense to show when cost 
accounting isn't enabled, because the code doesn't do that right now.  
No cost accounting, no buffer usage/write rate data as this was 
submitted.


This is done in the attached update.  I just made the page accounting 
happen all the time, regardless of whether the costs were being 
accumulated.  Added a read rate too, which is how fast reads happened 
from the OS cache to shared_buffers.  Simple test case generates a 600MB 
pgbench_accounts database and wipes out enough to take a while to clean 
up; it needs log_autovacuum_min_duration >= 0 and then:


$ createdb pgbench
$ pgbench -i -s 10 pgbench
$ psql -d pgbench -c "delete from pgbench_accounts where aid<20"

LOG:  automatic vacuum of table "pgbench.public.pgbench_accounts": index 
scans: 1

pages: 0 removed, 16394 remain
tuples: 19 removed, 640011 remain
buffer usage: 13742 hits, 2708 misses, 1058 dirtied
avg read rate: 3.067 MiB/s, avg write rate: 1.198 MiB/s
system usage: CPU 0.05s/0.61u sec elapsed 6.89 sec
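
(As a sanity check on those numbers: 2708 misses of 8192-byte blocks is
about 21.2 MiB, which over the 6.89 s elapsed works out to the reported
3.067 MiB/s read rate; the 1058 dirtied pages likewise come to about
8.3 MiB, or 1.198 MiB/s written.)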

Now that you mention it, people who do a manual, full-speed VACUUM 
would certainly appreciate some feedback on the rate it ran at.


This is more of a pain because this whole code path is only active when 
IsAutoVacuumWorkerProcess.  I have some larger refactoring in mind to 
perhaps make that more feasible.  I didn't want to hold this update 
aiming at the more valuable autovac case for that though, can always 
layer it on later.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index f42504c..6ef85dd 100644
*** a/src/backend/commands/vacuum.c
--- b/src/backend/commands/vacuum.c
*** vacuum(VacuumStmt *vacstmt, Oid relid, b
*** 214,219 
--- 214,222 
  
  		VacuumCostActive = (VacuumCostDelay  0);
  		VacuumCostBalance = 0;
+ 		VacuumPageHit = 0;
+ 		VacuumPageMiss = 0;
+ 		VacuumPageDirty = 0;
  
  		/*
  		 * Loop to process each selected relation.
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 38deddc..c59fceb 100644
*** a/src/backend/commands/vacuumlazy.c
--- b/src/backend/commands/vacuumlazy.c
*** lazy_vacuum_rel(Relation onerel, VacuumS
*** 154,160 
  	int			nindexes;
  	BlockNumber possibly_freeable;
  	PGRUsage	ru0;
! 	TimestampTz starttime = 0;
  	bool		scan_all;
  	TransactionId freezeTableLimit;
  	BlockNumber new_rel_pages;
--- 154,163 
  	int			nindexes;
  	BlockNumber possibly_freeable;
  	PGRUsage	ru0;
! 	TimestampTz starttime = 0, endtime;
!  	long		secs;
!  	int			usecs;
!  	double		read_rate, write_rate;
  	bool		scan_all;
  	TransactionId freezeTableLimit;
  	BlockNumber new_rel_pages;
*** lazy_vacuum_rel(Relation onerel, VacuumS
*** 166,173 
  	if (IsAutoVacuumWorkerProcess()  Log_autovacuum_min_duration = 0)
  	{
  		pg_rusage_init(ru0);
! 		if (Log_autovacuum_min_duration  0)
! 			starttime = GetCurrentTimestamp();
  	}
  
  	if (vacstmt-options  VACOPT_VERBOSE)
--- 169,175 
  	if (IsAutoVacuumWorkerProcess()  Log_autovacuum_min_duration = 0)
  	{
  		pg_rusage_init(ru0);
! 		starttime = GetCurrentTimestamp();
  	}
  
  	if (vacstmt-options  VACOPT_VERBOSE)
*** lazy_vacuum_rel(Relation onerel, VacuumS
*** 262,274 
  	/* and log the action if appropriate */
  	if (IsAutoVacuumWorkerProcess()  Log_autovacuum_min_duration = 0)
  	{
  		if (Log_autovacuum_min_duration == 0 ||
! 			TimestampDifferenceExceeds(starttime, GetCurrentTimestamp(),
  	   Log_autovacuum_min_duration))
  			ereport(LOG,
  	(errmsg("automatic vacuum of table \"%s.%s.%s\": index scans: %d\n"
  			"pages: %d removed, %d remain\n"
  			"tuples: %.0f removed, %.0f remain\n"
  			"system usage: %s",
  			get_database_name(MyDatabaseId),
  			get_namespace_name(RelationGetNamespace(onerel)),
--- 264,290 
  	/* and log the action if appropriate */
  	if (IsAutoVacuumWorkerProcess()  Log_autovacuum_min_duration = 0)
  	{
+ 		endtime = GetCurrentTimestamp();
  		if (Log_autovacuum_min_duration == 0 ||
! 			TimestampDifferenceExceeds(starttime, endtime,
  	   Log_autovacuum_min_duration))
+ 		{
+ 			TimestampDifference(starttime, endtime, secs, usecs);
+ 			read_rate = 0;
+ 			write_rate = 0;
+ 			if ((secs  0) || (usecs  0))
+ 			{
+ read_rate = (double) BLCKSZ * VacuumPageMiss / (1024 * 1024) /
+ 	(secs + usecs / 1000000.0);
+ write_rate = (double) BLCKSZ * VacuumPageDirty / (1024 * 1024) /
+  	(secs + usecs / 1000000.0);
+ 			}
  			ereport(LOG,
  	(errmsg("automatic vacuum of table \"%s.%s.%s\": index scans: %d\n"
  			"pages: %d removed, %d remain\n"
  			"tuples: %.0f removed, %.0f remain\n"
+ 			"buffer usage: %d hits, %d misses, %d dirtied\n"
+ 			"avg read rate: %.3f MiB/s, avg write 

[HACKERS] Client library cross-compiling: Win32, Win64, MacOSX. Possible?

2011-11-15 Thread Pavel Golub
Hello.

Are there any howtos or articles about building the client access
library (libpq) for several target OSes, e.g. Win32, Win64, MacOS, in
the same MinGW environment?

And is it possible at all? I know that there is MinGW-w64 to produce
Win64 binaries, but I want to have one farm for all.

If not, is there any way to obtain the needed binaries from some
PostgreSQL build farms?



-- 
With best wishes,
 Pavel  mailto:pa...@gf.microolap.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers