Re: [HACKERS] Bug in ALTER COLUMN SET DATA TYPE ?

2012-11-05 Thread Pavan Deolasee
On Mon, Nov 5, 2012 at 4:41 AM, Tom Lane t...@sss.pgh.pa.us wrote:



 It's clear that we need to pass down the information that this action is
 coming from re-creation of a check constraint, but I think the above
 proposal for how to do it is pretty wrong-headed.


Yeah, I only meant that we need to teach ATExecAddConstraint that it's being
called from the specific pass of ALTER TABLE and wanted to get agreement on
that. I hadn't thought about any particular implementation. So your
proposal below looks absolutely fine and clean.



 I'm inclined to think the cleanest solution is to add another value of
 enum AlterTableType, perhaps AT_ReAddConstraint, to signal that we're
 executing a re-add; and then add another bool parameter to
 ATExecAddConstraint to tell the latter not to complain if child tables
 exist.  This is more in line with pre-existing coding choices such as
 the use of AT_AddConstraintRecurse.


Please see the attached patch, which does what you suggested above. Maybe it
needs a little more commentary to record why we made this specific change.
Please let me know if you think so and want me to do that.
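
To make the scenario concrete, here is a minimal sketch of the kind of case
involved (an illustration only, not the exact case from the bug report): an
inherited CHECK constraint plus ALTER COLUMN SET DATA TYPE, which forces the
internal constraint re-add that reaches ATExecAddConstraint.

    CREATE TABLE parent (a int CHECK (a > 0));
    CREATE TABLE child () INHERITS (parent);
    -- Changing the column type drops and re-creates the CHECK constraint
    -- internally; that re-add is the ALTER TABLE pass discussed above.
    ALTER TABLE parent ALTER COLUMN a SET DATA TYPE bigint;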

Thanks,
Pavan


alter-type-readd-constraint.patch
Description: Binary data



Re: [HACKERS] Synchronous commit not... synchronous?

2012-11-05 Thread Peter Eisentraut
On 10/31/12 9:39 PM, Peter van Hardenberg wrote:
 This was rather surprising - my synchronous commit was... not cancelled.
 Is this expected behaviour?
 
 d5r5fdj6u5ieml=> begin;
 BEGIN
 d5r5fdj6u5ieml=> set synchronous_commit = 'on';
 SET
 d5r5fdj6u5ieml=> insert into data values ('baz');
 INSERT 0 1
 d5r5fdj6u5ieml=> commit;
 ^CCancel request sent
 WARNING:  canceling wait for synchronous replication due to user request
 DETAIL:  The transaction has already committed locally, but might not
 have been replicated to the standby.
 COMMIT
 d5r5fdj6u5ieml=> select * from data;
  foo 
 -----
  bar
  baz
 (2 rows)

Did the inserted row also arrive at the standby?




Re: [HACKERS] Synchronous commit not... synchronous?

2012-11-05 Thread Daniel Farina
On Fri, Nov 2, 2012 at 10:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On 2 November 2012 16:27, Jeff Janes jeff.ja...@gmail.com wrote:
 It would be.  But you are not cancelling the commit, you are
 *attempting* to cancel the commit.  The message you receive explains
 to what extent your attempt succeeded.

 That is correct.

 It is possible to cancel the COMMIT, but only until it happens.

 If people want full two phase commit, that option exists also.

I see why it is implemented this way, but it's also still pretty
unsatisfying because it means that with cancellation requests clients
are in theory able to commit an unlimited number of transactions,
synchronous commit or no.

It's probably close enough for most purposes, but what would you think
about a 2PC-ish mode at the physical (rather than logical/PREPARE
TRANSACTION) level, whereby the master would insist that its standbys
have more data written (or at least received...or at least sent) than
it has guaranteed flushed to its own xlog at any point?

This would be a nice invariant to have when dealing with a large
number of systems -- that standbys are always greater-than-or-equal-to
the master's XLogPos -- allowing for the catching of some tricky bugs.
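
As a rough illustration of checking that invariant by hand (a sketch using
the 9.2-era function names):

    -- on the master:
    SELECT pg_current_xlog_location();
    -- on each standby:
    SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();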

--
fdr




Re: [HACKERS] Synchronous commit not... synchronous?

2012-11-05 Thread Jeff Janes
On Fri, Nov 2, 2012 at 11:41 AM, Daniel Farina dan...@heroku.com wrote:
 On Fri, Nov 2, 2012 at 10:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On 2 November 2012 16:27, Jeff Janes jeff.ja...@gmail.com wrote:
 It would be.  But you are not cancelling the commit, you are
 *attempting* to cancel the commit.  The message you receive explains
 to what extent your attempt succeeded.

 That is correct.

 It is possible to cancel the COMMIT, but only until it happens.

 If people want full two phase commit, that option exists also.

 I see why it is implemented this way, but it's also still pretty
 unsatisfying because it means that with cancellation requests clients
 are in theory able to commit an unlimited number of transactions,
 synchronous commit or no.

What evil does this allow the client to perpetrate?

 It's probably close enough for most purposes, but what would you think
 about a 2PC-ish mode at the physical (rather than logical/PREPARE
 TRANSACTION) level, whereby the master would insist that its standbys
 have more data written (or at least received...or at least sent) than
 it has guaranteed flushed to its own xlog at any point?

Then if they interrupt the commit, the remote has it permanently but
the local does not.  That would be corruption.

What the DETAIL doesn't make clear about the current system is that
the commit *will* be replicated to the standby *eventually*, unless
the master burns down first.  In particular, if any commit after this
one makes it to the standby, then the interrupted one is guaranteed to
have made it as well.

 This would be a nice invariant to have when dealing with a large
 number of systems -- that standbys are always greater-than-or-equal-to
 the master's XLogPos -- allowing for the catching of some tricky bugs.

Could you elaborate on that?

Cheers,

Jeff




[HACKERS] Fwd: Stalled post to pgsql-hackers

2012-11-05 Thread Pavel Stehule
Hello

I cannot send a patch to the mailing list

Regards

Pavel Stehule


-- Forwarded message --
From:  pgsql-hackers-ow...@postgresql.org
Date: 2012/11/3
Subject: Stalled post to pgsql-hackers
To: Pavel Stehule pavel.steh...@gmail.com


Your message to pgsql-hackers has been delayed, and requires the approval
of the moderators, for the following reason(s):

GLOBAL ADMIN BODY:  /\bcancel\b/i matched cancel at line number 9.

If you do not wish the message to be posted, or have other concerns,
please send a message to the list owners at the following address:
  pgsql-hackers-ow...@postgresql.org


-- Forwarded message --
From: Pavel Stehule pavel.steh...@gmail.com
To: PostgreSQL Hackers pgsql-hackers@postgresql.org
Cc:
Date: Sat, 3 Nov 2012 19:45:36 +0100
Subject: gset updated patch
Hello

here is an updated patch

Regards

Pavel


gset_12.diff
Description: Binary data



Re: [HACKERS] Fwd: Stalled post to pgsql-hackers

2012-11-05 Thread Magnus Hagander
That message just means it's stuck in moderation. You just have to wait for
a moderator to approve it (which I just did now)

/Magnus
On Nov 5, 2012 10:19 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 I cannot send a patch to the mailing list

 Regards

 Pavel Stehule


 -- Forwarded message --
 From:  pgsql-hackers-ow...@postgresql.org
 Date: 2012/11/3
 Subject: Stalled post to pgsql-hackers
 To: Pavel Stehule pavel.steh...@gmail.com


 Your message to pgsql-hackers has been delayed, and requires the approval
 of the moderators, for the following reason(s):

 GLOBAL ADMIN BODY:  /\bcancel\b/i matched cancel at line number 9.

 If you do not wish the message to be posted, or have other concerns,
 please send a message to the list owners at the following address:
   pgsql-hackers-ow...@postgresql.org


 -- Forwarded message --
 From: Pavel Stehule pavel.steh...@gmail.com
 To: PostgreSQL Hackers pgsql-hackers@postgresql.org
 Cc:
 Date: Sat, 3 Nov 2012 19:45:36 +0100
 Subject: gset updated patch
 Hello

 here is an updated patch

 Regards

 Pavel






Re: [HACKERS] Fwd: Stalled post to pgsql-hackers

2012-11-05 Thread Pavel Stehule
Thanks Magnus :)

Pavel

2012/11/5 Magnus Hagander mag...@hagander.net:
 That message just means it's stuck in moderation. You just have to wait for
 a moderator to approve it (which I just did now)

 /Magnus

 On Nov 5, 2012 10:19 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 I cannot send a patch to the mailing list

 Regards

 Pavel Stehule


 -- Forwarded message --
 From:  pgsql-hackers-ow...@postgresql.org
 Date: 2012/11/3
 Subject: Stalled post to pgsql-hackers
 To: Pavel Stehule pavel.steh...@gmail.com


 Your message to pgsql-hackers has been delayed, and requires the approval
 of the moderators, for the following reason(s):

 GLOBAL ADMIN BODY:  /\bcancel\b/i matched cancel at line number 9.

 If you do not wish the message to be posted, or have other concerns,
 please send a message to the list owners at the following address:
   pgsql-hackers-ow...@postgresql.org


 -- Forwarded message --
 From: Pavel Stehule pavel.steh...@gmail.com
 To: PostgreSQL Hackers pgsql-hackers@postgresql.org
 Cc:
 Date: Sat, 3 Nov 2012 19:45:36 +0100
 Subject: gset updated patch
 Hello

 here is an updated patch

 Regards

 Pavel








Re: [HACKERS] foreign key locks

2012-11-05 Thread Alvaro Herrera
FWIW I have gotten a lot of feedback about this patch, and since I don't
have time right now to produce an updated version, I'm going to
close this as Returned with Feedback, and submit an updated version to
the upcoming commitfest.

This patch still needs much more review -- for example, as far as I
know, the multixact.c changes have gone largely unreviewed; they have
changed somewhat since Noah reviewed them (back in version 15 or so, I
think).  Of course, to me it all makes sense, but then I'm only its
author.

-- 
Alvaro Herrera http://www.flickr.com/photos/alvherre/
Everybody understands Mickey Mouse. Few understand Hermann Hesse.
Hardly anybody understands Einstein. And nobody understands Emperor Norton.




[HACKERS] gset updated patch

2012-11-05 Thread Pavel Stehule
Hello

here is an updated patch

Regards

Pavel


gset_12.diff
Description: Binary data



Re: [HACKERS] foreign key locks

2012-11-05 Thread Andres Freund
On Monday, November 05, 2012 02:37:15 PM Alvaro Herrera wrote:
 FWIW I have gotten a lot of feedback about this patch, and since I don't
 have time right now to produce an updated version, I'm going to
 close this as Returned with Feedback, and submit an updated version to
 the upcoming commitfest.
 
 This patch still needs much more review -- for example, as far as I
 know, the multixact.c changes have gone largely unreviewed; they have
 changed somewhat since Noah reviewed them (back in version 15 or so, I
 think).  Of course, to me it all makes sense, but then I'm only its
 author.

There also hasn't been any recent performance testing. I am not sure if I can 
get the time to do so before the next commitfest...

Andres
-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Magnus Hagander
On Mon, Oct 22, 2012 at 4:24 PM, Stephen Frost sfr...@snowman.net wrote:

 Magnus, all,

 * Magnus Hagander (mag...@hagander.net) wrote:
  On Thu, Oct 18, 2012 at 5:59 PM, Robert Haas robertmh...@gmail.com
 wrote:
   That seems like a sufficiently long deprecation window, but is gssapi
   a full substitute for krb5?  I don't really have a strong opinion on
   this, not being a user myself.
 
  I'm pretty sure that it is.
 
  Stephen, you usually have comments about the Kerberos stuff - want to
  comment on this one? :)

 The biggest risk that I can think of regarding deprecating krb5 would be
 platforms (if any still exist...) which don't have GSSAPI.  Is it


I have no idea what platform that would be. Both the standard
implementations of krb5 have supported gssapi since forever. The only
nonstandard environment we support there is Windows, and that one *only*
has support for GSSAPI/SSPI.



 possible to see that from the buildfarm information or from the
 configure results that people have for any strange/different platforms
 out there?  The other question would be if we think anyone's actually


Well, we can remove it and see if it breaks :)



 using krb5 on those platforms and/or would people in those situations be
 willing/able to move to a different library which supports GSSAPI.

 I'm all for deprecating krb5 myself, but I wouldn't want to break things
 for people without good cause.


It's been deprecated for *years*. This is about removing it.

The cause would be keeping the code clean and reducing the maintenance of
security code in general, which is a good thing.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Allow WAL information to recover corrupted pg_controldata

2012-11-05 Thread Alvaro Herrera
I'm not sure what to do with this patch.  There was some resistance to
the idea originally; then after some discussion, there was some
apparent agreement that it might be useful on occasion.  Later, a patch
was posted, but there was almost no review of it, except to say that it
should probably be reworked on top of a hypothetical, future XLogReader
feature.

Since it doesn't look like we're going anywhere with it soon, I'm going
to close it as returned with feedback.  Hopefully, if we get XLogReader
in 9.3, we will have time to rebase this patch on top of that.  (I
invite Amit to give Heikki's version of XLogReader patch a look.)

(It is very hard to track down vague references to old threads that
aren't properly linked in new threads.  Please make sure to reply to old
emails, or at least to give Message-Ids or URLs when starting new
threads.  I am replying to one message of each old thread here.)

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Unresolved error 0xC0000409 on Windows Server

2012-11-05 Thread Matthew Gerber
On Sun, Nov 4, 2012 at 3:39 AM, Craig Ringer ring...@ringerc.id.au wrote:

  On 11/04/2012 08:47 AM, Matthew Gerber wrote:


 So I attached the VS debugger, but the server died without raising an
 exception in VS. Not sure what's going on here.


   Try creating a directory called crashdumps in the data directory, at
 the same level as pg_xlog and pg_clog etc. Give the postgresql user
 the full control permission on it. Then run the test again.

 Do any minidump files appear in the directory? If so, you can examine them
 with windbg or Visual Studio to see where the crash happened.


I did this but nothing appears in crashdumps after the server crashes. The
latest test I did included the addition of this directory and the disabling
of my antivirus software. Nothing seems to have changed. Following Tom's
suggestion, I'll try to get a stack trace again (last time didn't produce
anything).

The only other thing I've noticed is that the crash always occurs when
inserting into the places table (definition in previous email), even
though there are two other tables that are also receiving inserts. This is
odd to me. Any thoughts?

Matt


Re: [HACKERS] Statistics and selectivity estimation for ranges

2012-11-05 Thread Alvaro Herrera
What's going on with this patch?  I haven't seen any activity in a
while.  Should I just move this to the next commitfest?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Unresolved error 0xC0000409 on Windows Server

2012-11-05 Thread Merlin Moncure
On Fri, Nov 2, 2012 at 8:00 PM, Noah Misch n...@leadboat.com wrote:
 hm, several times over the last couple of months (both on postgres 9.1
 and 9.2), i've seen  a similar crash, but on linux.  It hits the log
 like this:

 Execution halted  (~ 200x)
 Error: segfault from C stack overflow
 Execution halted  (~ 30x)
 LOG:  server process (PID 19882) was terminated by signal 11: Segmentation 
 fault
 LOG:  terminating any other active server processes

 note the lack of LOG in 'Execution halted', etc.  This has happened
 several times, on different servers using different workloads (but
 always under load).  As of yet, I've been unable to get a core but I
 hope to get one next time it happens.  I wonder if it's a similar
 cause?

 Google suggests those unadorned messages originate in R.  Do the affected
 systems use PL/R?  If so ...

yes -- they do.  I was pretty certain that no R code was running at
the time of the crash but not 100% sure.  That's a big clue -- thanks.
 Investigating...

merlin




Re: [HACKERS] Allow WAL information to recover corrupted pg_controldata

2012-11-05 Thread Amit Kapila
On Monday, November 05, 2012 7:33 PM  Alvaro Herrera wrote:
 I'm not sure what to do with this patch.  There was some resistance to
 the idea originally; then after some discussion, there was some
 apparent agreement that it might be useful on occasion.  Later, a patch
 was posted, but there was almost no review of it, except to say that it
 should probably be reworked on top of a hypothetical, future XLogReader
 feature.
 
 Since it doesn't look like we're going anywhere with it soon, I'm going
 to close it as returned with feedback.  Hopefully, if we get XLogReader
 in 9.3, we will have time to rebase this patch on top of that.  (I
 invite Amit to give Heikki's version of XLogReader patch a look.)

The patch on which Heikki has given comments
(https://commitfest.postgresql.org/action/patch_view?id=897) has already been
moved to the next CF.
This was not related to XLogReader. However, as there is not much interest in
this feature, it is okay.
 
 (It is very hard to track down vague references to old threads that
 aren't properly linked in new threads.  Please make sure to reply to old
 emails, or at least to give Message-Ids or URLs when starting new
 threads.  I am replying to one message of each old thread here.)

My mistake, I am sorry for that, and I shall try to take more care in future
work.

With Regards,
Amit Kapila.





Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Stephen Frost
Magnus,

* Magnus Hagander (mag...@hagander.net) wrote:
 I have no idea what platform that would be. Both the standard
 implementations of krb5 have supported gssapi since forever. The only
 nonstandard environment we support there is Windows, and that one *only*
 has support for GSSAPI/SSPI.

There are some older Unixes that had their own Kerberos libraries;
that's what I was specifically referring to.  I agree that there's
really only 2 implementations among the major free/open source
distributions and that those have supported GSSAPI for a long time.

 Well, we can remove it and see if it breaks :)

That was more-or-less what I was encouraging.. :D

The only question there is if we're even building w/ krb5 and/or
gssapi support on the buildfarm by default today..?
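
For reference, a sketch of the configure switches in question as they exist
today:

    ./configure --with-gssapi   # build GSSAPI support
    ./configure --with-krb5    # build the deprecated krb5 support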

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 9:57 AM, Stephen Frost sfr...@snowman.net wrote:
 Magnus,

 * Magnus Hagander (mag...@hagander.net) wrote:
 I have no idea what platform that would be. Both the standard
 implementations of krb5 have supported gssapi since forever. The only
 nonstandard environment we support there is Windows, and that one *only*
 has support for GSSAPI/SSPI.

 There are some older unixes that had their own Kerberos libraries,
 that's what I was specifically referring to.  I agree that there's
 really only 2 implementations among the major free/open source
 distributions and that those have supported GSSAPI for a long time.

 Well, we can remove it and see if it breaks :)

 That was more-or-less what I was encouraging.. :D

 The only question there is if we're even building w/ krb5 and/or
 gssapi support on the buildfarm by default today..?

Well, looking at the BF:

http://www.pgbuildfarm.org/cgi-bin/show_status.pl

...it seems there are LOTS of machines building with krb5, and NONE with gssapi.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Magnus Hagander
On Mon, Nov 5, 2012 at 6:10 PM, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Nov 5, 2012 at 9:57 AM, Stephen Frost sfr...@snowman.net wrote:
  Magnus,
 
  * Magnus Hagander (mag...@hagander.net) wrote:
  I have no idea what platform that would be. Both the standard
  implementations of krb5 have supported gssapi since forever. The only
  nonstandard environment we support there is Windows, and that one *only*
  has support for GSSAPI/SSPI.
 
  There are some older unixes that had their own Kerberos libraries,
  that's what I was specifically referring to.  I agree that there's
  really only 2 implementations among the major free/open source
  distributions and that those have supported GSSAPI for a long time.
 
  Well, we can remove it and see if it breaks :)
 
  That was more-or-less what I was encouraging.. :D
 
  The only question there is if we're even building w/ krb5 and/or
  gssapi support on the buildfarm by default today..?

 Well, looking at the BF:

 http://www.pgbuildfarm.org/cgi-bin/show_status.pl

 ...it seems there are LOTS of machines building with krb5, and NONE with
 gssapi.


AFAICS there is no icon for gssapi. So your first statement is correct, but
the second one isn't.

That said, if we don't have animals building with gssapi, that's a problem
regardless of what we're doing here. What's the easiest way to make that
happen?

And can we get stats somehow of how many actually do build with gssapi even
though there is no icon for it? Andrew?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] WIP checksums patch

2012-11-05 Thread Robert Haas
On Mon, Oct 29, 2012 at 4:31 PM, Jim Nasby j...@nasby.net wrote:
 For whatever it's worth... we (and presumably others) still use londiste (or
 Slony) as our upgrade path, so we could tolerate a cluster-wide setting.
 We'd just set it when building new clusters via londiste and forget about
 it.

 So I'd rather see this get in at a cluster level than not make it at all
 while we wait for something better.

Yeah.  I definitely think that we could shed an enormous amount of
complexity by deciding that this is, for now, an option that can only
be selected at initdb time.  That would remove approximately 85% of
everything I've ever disliked about this patch - without, I think,
precluding the possibility of improving things later.

It also occurred to me that another way to reduce the scope of this
change would be to have a first version that does CRCs only for SLRU
pages.  That would be useful for verifying the integrity of some of
our most critical data (pg_clog) and be a useful building block toward
a more complete implementation.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PERFORM] out of memory

2012-11-05 Thread Robert Haas
On Tue, Oct 30, 2012 at 6:08 AM, Tatsuo Ishii is...@postgresql.org wrote:
 I have a SQL file (its size is 1 GB).
 When I execute it, the "String of 987098801 bytes is too long for encoding
 conversion" error occurred.
 Please give me a solution.

 You hit the upper limit of internal memory allocation in
 PostgreSQL. IMO, there's no way to avoid the error except to use a
 client encoding identical to the backend's.

We recently had a customer who suffered a failure in pg_dump because
the quadruple-allocation required by COPY OUT for an encoding
conversion exceeded allocatable memory.  I wonder whether it would be
possible to rearrange things so that we can do a streaming encoding
conversion.  That is, if we have a large datum that we're trying to
send back to the client, could we perhaps chop off the first 50MB or
so, do the encoding on that amount of data, send the data to the
client, lather, rinse, repeat?

Your recent work to increase the maximum possible size of large
objects (for which I thank you) seems like it could make these sorts
of issues more common.  As objects get larger, I don't think we can go
on assuming that it's OK for peak memory utilization to keep hitting
5x or more.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PERFORM] out of memory

2012-11-05 Thread John R Pierce

On 11/05/12 9:27 AM, Robert Haas wrote:

That is, if we have a large datum that we're trying to
send back to the client, could we perhaps chop off the first 50MB or
so, do the encoding on that amount of data, send the data to the
client, lather, rinse, repeat?


I'd suggest work_mem sized chunks for this?



--
john r pierce                      N 37, W 122
santa cruz ca                      mid-left coast





Re: [HACKERS] What are the advantages of not being able to access multiple databases with one connection?

2012-11-05 Thread Robert Haas
On Tue, Oct 30, 2012 at 8:37 AM, crocket crockabisc...@gmail.com wrote:
 MySQL permits a connection to access multiple databases.
 But Postgresql restricts a connection to one database.
 I think the PostgreSQL database connection is somewhat limited.

 Is it an old and decrepit design, or does it deserve some appreciation?

I think it deserves some appreciation.  Each database is completely
isolated in terms of privileges, which is sometimes useful.  Also, if
you somehow manage to fry the system catalogs in one database, the
other ones can still survive.  The role played by databases in MySQL
is served by schemas in PostgreSQL, so I don't see that there is a
functional gap here.  I am not sure I'd bother implementing the
multi-database concept today if we didn't have it already ... but it
seems kind of pointless to rip it out given that it's already there.
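
To make the schema analogy concrete, a minimal sketch (the names here are
invented for illustration) of one connection spanning several namespaces the
way a MySQL connection spans databases:

    CREATE SCHEMA app1;
    CREATE SCHEMA app2;
    CREATE TABLE app1.users (id serial PRIMARY KEY, name text);
    CREATE TABLE app2.users (id serial PRIMARY KEY, name text);
    -- a cross-schema join within a single connection:
    SELECT a.name, b.name
    FROM app1.users a JOIN app2.users b ON a.id = b.id;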

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] PL/Python: Add spidata to all spiexceptions

2012-11-05 Thread Robert Haas
On Wed, Oct 31, 2012 at 5:33 AM, Jan Urbański wulc...@wulczer.org wrote:
 On 30/10/12 22:06, Oskari Saarenmaa wrote:

 PL/Python maps Python SPIError exceptions with 'spidata' attribute into
 SQL errors.  PL/Python also creates classes in plpy.spiexceptions for all
 known errors but does not initialize their spidata, so when a PL/Python
 function raises such an exception it is not recognized properly and is
 always reported as an internal error.

 You're right, I never thought of the possibility of user code explicitly
 throwing SPIError exceptions.

 The root issue is that PLy_elog will only set errcode if it finds the
 spidata attribute, but I think passing error details through that
 attribute is a kludge more than something more code should rely on.

 Here's an alternative patch that takes advantage of the already present (and
 documented) sqlstate variable to set the error code when handling SPIError
 exceptions.

 I also used your test case and added another one, just in case.
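
A rough sketch of the user-visible case under discussion (assuming the
plpythonu language is installed; UniqueViolation stands in for any of the
generated plpy.spiexceptions classes), where user code raising such an
exception should surface with its proper SQLSTATE instead of as an internal
error:

    CREATE FUNCTION raise_unique() RETURNS void AS $$
        raise plpy.spiexceptions.UniqueViolation("deliberately raised")
    $$ LANGUAGE plpythonu;

    SELECT raise_unique();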

You should probably add this to the next CF so we don't forget about it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] PL/Python: Add spidata to all spiexceptions

2012-11-05 Thread Jan Urbański

On 05/11/12 18:35, Robert Haas wrote:

On Wed, Oct 31, 2012 at 5:33 AM, Jan Urbański wulc...@wulczer.org wrote:

On 30/10/12 22:06, Oskari Saarenmaa wrote:


PL/Python maps Python SPIError exceptions with 'spidata' attribute into
SQL errors.


Here's an alternative patch that takes advantage of the already present (and
documented) sqlstate variable to set the error code when handling SPIError
exceptions.

I also used your test case and added another one, just in case.


You should probably add this to the next CF so we don't forget about it.


I will, as soon as I recover my community account.

Cheers,
Jan




Re: [HACKERS] September 2012 commitfest

2012-11-05 Thread Alvaro Herrera
I said last week:

 Waiting on Author: 1
 Needs Review: 10
 Ready for Committer: 7

Now there are only 9 patches Ready for committer.  All other patches
have either been moved to the next commitfest, or returned with
feedback.

So we've made some progress, but we need a final push from committers.
A few of these patches have had a committer say they would look into
them, so I've left them in the September commitfest in case one of them
has time to look into them this week.

Oleg, Teodor:
 Incorrect behaviour when using a GiST index on points
 * This is a bug fix.

Greg Stark:
 tuplesort memory usage: grow_memtuples
 Trim trailing NULL columns

Tom Lane:
 Updatable views

Magnus:
 Make pg_basebackup configure and start standby

Andrew Dunstan:
 parallel pg_dump

Heikki?
 Decrease GiST bloat when penalties are identical

Magnus? Andrew?
 Fix console prompt encoding on Windows

?
 plpgsql_check_function

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] Limiting the number of parameterized indexpaths created

2012-11-05 Thread Robert Haas
On Tue, Oct 30, 2012 at 5:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I looked into the complaint of unreasonable planner runtime in bug #7626,
 http://archives.postgresql.org/pgsql-bugs/2012-10/msg00232.php

 In the given example, the indexed relation foo has join clauses with
 30 other relations.  The code that I added in commit
 3b8968f25232ad09001bf35ab4cc59f5a501193e will try all 2^30 combinations
 of those rels as possible outer relations for a parameterized indexscan
 :-(.  So clearly, the idea that we can just try everything and not have
 some kind of heuristic restriction isn't going to work.

You know, when I read this, my first thought was ... why is this an
exponential relationship instead of a linear one?  Even now, I'm not
sure I quite understand that.  With a parameterized path, we get an
index scan (or index-only scan) with a.id taking its value from some
outer scan, but it can't take its value from more than one outer scan.
 Can it?  So what does it mean to parameterize the scan of foo by both
ag1 (aid) and ag2 (aid)?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] alter table tablename add column - breaks pl/pgsql function returns tablename

2012-11-05 Thread Robert Haas
On Thu, Nov 1, 2012 at 12:14 AM, Amit kapila amit.kap...@huawei.com wrote:
 Is this very hard to fix?

Currently the compiled body is not discarded on DDL, so I believe it is not
a bug as per the current implementation.
However, it can be thought of as a new feature.

Seems like a bug to me.
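
A rough repro sketch of the scenario in question (table and function names
invented for illustration; run in a single session so the cached function
state is reused):

    CREATE TABLE t (a int);
    CREATE FUNCTION get_t() RETURNS SETOF t AS $$
    BEGIN
        RETURN QUERY SELECT * FROM t;
    END;
    $$ LANGUAGE plpgsql;

    SELECT * FROM get_t();            -- works, and caches the compiled body
    ALTER TABLE t ADD COLUMN b int;
    SELECT * FROM get_t();            -- may now fail with a row-type mismatch
                                      -- instead of picking up the new column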

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Bug in ALTER COLUMN SET DATA TYPE ?

2012-11-05 Thread Tom Lane
Pavan Deolasee pavan.deola...@gmail.com writes:
 Please see the attached patch, which does what you suggested above. Maybe it
 needs a little more commentary to record why we made this specific change.
 Please let me know if you think so and want me to do that.

Applied with some cosmetic adjustments and addition of a regression
test.

regards, tom lane




Re: [HACKERS] Doc patch, distinguish sections with an empty row in error code table

2012-11-05 Thread Robert Haas
On Fri, Oct 26, 2012 at 10:03 AM, Karl O. Pinc k...@meme.com wrote:
 This patch adds an empty row before each section header
 in the error codes table in the docs.

 I tried not putting an empty row before the first
 section, but it looks better to always have
 an empty row.  IMO.

 File: errorcode_table.patch

 Applies against head.

This doesn't seem like a particularly good idea to me, but what do
other people think?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] PL/Python: Add spidata to all spiexceptions

2012-11-05 Thread Jan Urbański

On 05/11/12 19:07, Jan Urbański wrote:

On 05/11/12 18:35, Robert Haas wrote:


You should probably add this to the next CF so we don't forget about it.


I will, as soon as I recover my community account.


Added as https://commitfest.postgresql.org/action/patch_view?id=971

J




Re: [HACKERS] Update obsolete text in indexam.sgml

2012-11-05 Thread Tom Lane
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
 ISTM it would be better to update the text about index cost estimation in
 indexam.sgml.  Please find attached a patch.

I'm not too thrilled with the proposed patch.  In the first place, I
don't think it's necessary to address costing of index order-by
expressions in an introductory explanation.  It seems likely that no FDW
will ever need to deal with that at all.  In the second, this change
makes the code less clear, not more so, because it introduces a variable
indexQuals without showing where you would get that value from.

regards, tom lane




Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Peter Eisentraut
On 11/5/12 12:13 PM, Magnus Hagander wrote:
 AFAICS there is no icon for gssapi. So your first statement is correct,
 but the second one isn't.

Yeah, for example it's used here:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=smew&dt=2012-11-02%2011%3A38%3A04





Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Andrew Dunstan


On 11/05/2012 12:13 PM, Magnus Hagander wrote:



http://www.pgbuildfarm.org/cgi-bin/show_status.pl

...it seems there are LOTS of machines building with krb5, and
NONE with gssapi.



AFAICS there is no icon for gssapi. So your first statement is 
correct, but the second one isn't.






If someone would like to give me an icon I'll add it.

cheers

andrew






Re: [HACKERS] RFC: Timing Events

2012-11-05 Thread Josh Berkus

 I think auto_explain would help you solve such rare incidents
 if it could dump several statistics into the server log, including lock
 waits and block read/write statistics per session, for example.
 
 Do we have something to add to auto_explain?

Well, to be frank, I've never found auto_explain to be useful because of
its restriction to superuser sessions.  It's an interesting
proof-of-concept, but completely useless at any production site.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Magnus Hagander
On Mon, Nov 5, 2012 at 7:50 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 11/05/2012 12:13 PM, Magnus Hagander wrote:



 
  http://www.pgbuildfarm.org/cgi-bin/show_status.pl

 ...it seems there are LOTS of machines building with krb5, and
 NONE with gssapi.



 AFAICS there is no icon for gssapi. So your first statement is correct,
 but the second one isn't.




 If someone would like to give me an icon I'll add it.


Well, if we're removing krb5 we could reuse that one :)

And no, I don't have any good ideas icon-wise to distinguish gssapi from
krb5...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Limiting the number of parameterized indexpaths created

2012-11-05 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Oct 30, 2012 at 5:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I looked into the complaint of unreasonable planner runtime in bug #7626,
 http://archives.postgresql.org/pgsql-bugs/2012-10/msg00232.php

 You know, when I read this, my first thought was ... why is this an
 exponential relationship instead of a linear one?

Because it's considering *combinations* of outer relations for a
parameterized scan.  For instance consider an index on t(a,b)
and a query
WHERE t.a = x.c1 AND t.b = y.c2
There are three different parameterized paths we could create: one
relying on x only, one relying on y only, one relying on both.
The one relying on y only is probably going to suck, if this is a
btree index, but at the level we're working at here that's not yet
apparent.  The other two are definitely both worthy of consideration,
since it might or might not be worth it to join x and y first in order
to use both conditions in scanning t.
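
Spelling out the setup that example assumes (only the index on t(a,b) is given
above; the other names are filled in for illustration):

    CREATE TABLE t (a int, b int);
    CREATE INDEX t_a_b_idx ON t (a, b);
    CREATE TABLE x (c1 int);
    CREATE TABLE y (c2 int);
    -- the planner can consider index scans on t parameterized by x, by y,
    -- or by both:
    EXPLAIN SELECT * FROM x, y, t WHERE t.a = x.c1 AND t.b = y.c2;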

So in general, given join clauses that reference N different outer
relations, you could have as many as 2^N-1 sets of outer relations that
could possibly generate usefully-different parameterized paths.  In
practice, since all these clauses must be usable with the same index,
there's probably not nearly that many useful combinations --- but again,
it's hard to know exactly which ones are winners in advance of doing any
cost calculations.

regards, tom lane




Re: [HACKERS] Limiting the number of parameterized indexpaths created

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 2:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Oct 30, 2012 at 5:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I looked into the complaint of unreasonable planner runtime in bug #7626,
 http://archives.postgresql.org/pgsql-bugs/2012-10/msg00232.php

 You know, when I read this, my first thought was ... why is this an
 exponential relationship instead of a linear one?

 Because it's considering *combinations* of outer relations for a
 parameterized scan.  For instance consider an index on t(a,b)
 and a query
 WHERE t.a = x.c1 AND t.b = y.c2
 There are three different parameterized paths we could create: one
 relying on x only, one relying on y only, one relying on both.

Sure, but that example is different from the test case provided in the
bug report.  I agree that here we need to try paths parameterized by
a, b, or both a and b.  Things might blow up multiplicatively, because
we have join clauses referencing both t.a and t.b.  But they shouldn't
blow up exponentially, because each of t.a and t.b can only be
parameterized by ONE thing (I think).  And in the example in the bug
report, only one column of the table (foo.id) is mentioned.  foo.id
can be driven by ag1.aid OR ag2.aid OR ag3.aid OR ..., but not more
than one of those at a time.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Limiting the number of parameterized indexpaths created

2012-11-05 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Mon, Nov 5, 2012 at 2:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 There are three different parameterized paths we could create: one
 relying on x only, one relying on y only, one relying on both.

 Sure, but that example is different from the test case provided in the
 bug report.  I agree that here we need to try paths parameterized by
 a, b, or both a and b.  Things might blow up multiplicatively, because
 we have join clauses referencing both t.a and t.b.  But they shouldn't
 blow up exponentially, because each of t.a and t.b can only be
 parameterized by ONE thing (I think).

Um, no.  This is a useful counterexample:

WHERE t.a > x.c1 AND t.a < y.c2

With a range constraint like this one, it's possible for the
doubly-parameterized path to be quite useful while either
singly-parameterized path is basically useless.  And these examples
aren't even going into cases you might get with non-btree indexes,
where clauses could interact in much more complicated ways.

 And in the example in the bug
 report, only one column of the table (foo.id) is mentioned.  foo.id
 can be driven by ag1.aid OR ag2.aid OR ag3.aid OR ..., but not more
 than one of those at a time.

In the example, we do figure out that the clauses are redundant, but
only further downstream.  The code that's got the problem can't really
assume such a thing.  As patched, it will indeed limit what it considers
to at most one additional clause per index column, once it's hit the
heuristic limit --- but it's entirely possible for it to miss useful
combinations because of that.

regards, tom lane




Re: [HACKERS] Limiting the number of parameterized indexpaths created

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 2:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Mon, Nov 5, 2012 at 2:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 There are three different parameterized paths we could create: one
 relying on x only, one relying on y only, one relying on both.

 Sure, but that example is different from the test case provided in the
 bug report.  I agree that here we need to try paths parameterized by
 a, b, or both a and b.  Things might blow up multiplicatively, because
 we have join clauses referencing both t.a and t.b.  But they shouldn't
 blow up exponentially, because each of t.a and t.b can only be
 parameterized by ONE thing (I think).

 Um, no.  This is a useful counterexample:

  WHERE t.a > x.c1 AND t.a < y.c2

 With a range constraint like this one, it's possible for the
 doubly-parameterized path to be quite useful while either
 singly-parameterized path is basically useless.  And these examples
 aren't even going into cases you might get with non-btree indexes,
 where clauses could interact in much more complicated ways.

Well, OK.  So maybe you also need the operator to be the same as well.

 And in the example in the bug
 report, only one column of the table (foo.id) is mentioned.  foo.id
 can be driven by ag1.aid OR ag2.aid OR ag3.aid OR ..., but not more
 than one of those at a time.

 In the example, we do figure out that the clauses are redundant, but
 only further downstream.  The code that's got the problem can't really
 assume such a thing.  As patched, it will indeed limit what it considers
 to at most one additional clause per index column, once it's hit the
 heuristic limit --- but it's entirely possible for it to miss useful
 combinations because of that.

Seems unfortunate, but I don't understand the code well enough to know
how to do better.


-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Synchronous commit not... synchronous?

2012-11-05 Thread Daniel Farina
On Sun, Nov 4, 2012 at 6:00 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Nov 3, 2012 at 5:44 PM, Florian Weimer f...@deneb.enyo.de wrote:
 * Daniel Farina:
 The idea of canceling a COMMIT statement causing a COMMIT seems pretty
 strange to me.

 Canceling commits is inherently racy, so I'm not sure if this behavior
 so strange after all.

 Yeah.  You can't make the local fsync() and the remote fsync() happen
 at exactly the same moment in time.  No implementation can do that,
 anywhere, ever.  Our implementation happens to require the local
 fsync() to always be done first.

I don't think there is an (unachievable) requirement of simultaneous
flush, only that two machines have flushed (or met whatever durability
criteria) strictly past the position of the commit in question.
This means some changes are written to some place once, but
acknowledging the commit requires proof of two-safety.

I can see how in some corner cases this might cause orphaning of
synchronous standbys that write, but cannot acknowledge.

If the point of synchronous commit is to reach exact two-safety by
waiting a while for other agents to process data, it would seem that
the current model could use some less-invasive tweaking; as-is, one can
succeed in an unbounded number of commits in a degenerate case.

--
fdr




Re: [HACKERS] Limiting the number of parameterized indexpaths created

2012-11-05 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Mon, Nov 5, 2012 at 2:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Um, no.  This is a useful counterexample:
   WHERE t.a > x.c1 AND t.a < y.c2

 Well, OK.  So maybe you also need the operator to be the same as well.

Nope.  A counterexample to that claim is a GIN index on an array column:

WHERE t.arraycol @> array[1,2,3] AND t.arraycol @> array[4,5,6]

This restriction is equivalent to

WHERE t.arraycol @> array[1,2,3,4,5,6]

which is substantially more selective than either constraint alone.
If the two RHS arrays are not constants, but are coming from different
tables x and y, then we have something isomorphic to the previous
example (at least from the perspective of indxpath.c), but it would
not be good for indxpath.c to assume that these clauses couldn't be
useful together.

We *can* make a simplifying assumption of the kind you suggest when
we know that the clauses were all generated from the same equivalence
class, because then we have very strong assumptions about what the
clauses' semantics are.  (And indeed the patch does take care of that
case separately.)  But for the general case of non-equijoin clauses
we can't assume very much at all about whether clauses are redundant,
at least not without knowledge that indxpath.c hasn't got.

   As patched, it will indeed limit what it considers
 to at most one additional clause per index column, once it's hit the
 heuristic limit --- but it's entirely possible for it to miss useful
 combinations because of that.

 Seems unfortunate, but I don't understand the code well enough to know
 how to do better.

Me either.  What I will say is that as patched, the code will still
find all useful clause combinations as long as there aren't too many
other relations involved.  I've not been able to think of another way
of restricting the search that doesn't reject possibly-useful
combinations even in otherwise-very-simple queries.

regards, tom lane




[HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Bruce Momjian
Magnus reported that a customer with a million tables was finding
pg_upgrade slow.  I had never considered many tables to be a problem, but
decided to test it.  I created a database with 2k tables like this:

CREATE TABLE test1990 (x SERIAL);
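
For anyone wanting to repeat the test, here is a sketch of generating a
comparable set of tables with a DO block rather than a pre-generated script:

    DO $$
    BEGIN
        FOR i IN 1..2000 LOOP
            EXECUTE format('CREATE TABLE test%s (x SERIAL)', i);
        END LOOP;
    END;
    $$;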

Running the git version of pg_upgrade on that took 203 seconds.  Using
synchronous_commit=off dropped the time to 78 seconds.  This was tested
on magnetic disks with a write-through cache.  (No change on an SSD with
a super-capacitor.)

I don't see anything unsafe about having pg_upgrade use
synchronous_commit=off.  I could set it just for the pg_dump reload, but
it seems safe to just use it always.  We don't write to the old cluster,
and if pg_upgrade fails, you have to re-initdb the new cluster anyway.

Patch attached.  I think it should be applied to 9.2 as well.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/pg_upgrade/server.c b/contrib/pg_upgrade/server.c
new file mode 100644
index a9f9d85..e64d0c4
*** a/contrib/pg_upgrade/server.c
--- b/contrib/pg_upgrade/server.c
*** start_postmaster(ClusterInfo *cluster)
*** 207,216 
  	 * vacuums can still happen, so we set autovacuum_freeze_max_age to its
  	 * maximum.  We assume all datfrozenxid and relfrozen values are less than
  	 * a gap of 20 from the current xid counter, so autovacuum will
! 	 * not touch them.
  	 */
  	snprintf(cmd, sizeof(cmd),
! 			 "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d %s %s%s\" start",
  		  cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
  			 (cluster->controldata.cat_ver >=
  			  BINARY_UPGRADE_SERVER_FLAG_CAT_VER) ? "-b" :
--- 207,217 
  	 * vacuums can still happen, so we set autovacuum_freeze_max_age to its
  	 * maximum.  We assume all datfrozenxid and relfrozen values are less than
  	 * a gap of 20 from the current xid counter, so autovacuum will
! 	 * not touch them.  synchronous_commit=off improves object creation speed.
  	 */
  	snprintf(cmd, sizeof(cmd),
! 			 \%s/pg_ctl\ -w -l \%s\ -D \%s\ -o \-p %d 
! 			 -c synchronous_commit=off %s %s%s\ start,
  		  cluster-bindir, SERVER_LOG_FILE, cluster-pgconfig, cluster-port,
  			 (cluster-controldata.cat_ver =
  			  BINARY_UPGRADE_SERVER_FLAG_CAT_VER) ? -b :



Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Magnus reported that a customer with a million tables was finding
 pg_upgrade slow.

You sure there's not an O(N^2) issue in there somewhere?

 I don't see anything unsafe about having pg_upgrade use
 synchronous_commit=off.

No objection, but this seems unlikely to be better than linear speedup,
with a not-terribly-large constant factor.

BTW, does pg_upgrade run pg_restore in --single-transaction mode?
That would probably make synchronous_commit moot, at least for that
step.

regards, tom lane




Re: [HACKERS] Limiting the number of parameterized indexpaths created

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 3:07 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Mon, Nov 5, 2012 at 2:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Um, no.  This is a useful counterexample:
  WHERE t.a  x.c1 AND t.a  y.c2

 Well, OK.  So maybe you also need the operator to be the same as well.

 Nope.  A counterexample to that claim is a GIN index on an array column:

 WHERE t.arraycol @> array[1,2,3] AND t.arraycol @> array[4,5,6]

 This restriction is equivalent to

 WHERE t.arraycol @> array[1,2,3,4,5,6]

 which is substantially more selective than either constraint alone.
 If the two RHS arrays are not constants, but are coming from different
 tables x and y, then we have something isomorphic to the previous
 example (at least from the perspective of indxpath.c), but it would
 not be good for indxpath.c to assume that these clauses couldn't be
 useful together.

Neat example.

 We *can* make a simplifying assumption of the kind you suggest when
 we know that the clauses were all generated from the same equivalence
 class, because then we have very strong assumptions about what the
 clauses' semantics are.  (And indeed the patch does take care of that
 case separately.)  But for the general case of non-equijoin clauses
 we can't assume very much at all about whether clauses are redundant,
 at least not without knowledge that indxpath.c hasn't got.

OK.  Fortunately, I don't think we need to care too much about that
case, since non-equijoins are pretty rare.  A reasonable heuristic
restriction seems fine for that case ... at least until the next
problem case shows up.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Magnus Hagander
On Mon, Nov 5, 2012 at 9:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Bruce Momjian br...@momjian.us writes:
  Magnus reported that a customer with a million tables was finding
  pg_upgrade slow.

 You sure there's not an O(N^2) issue in there somewhere?


  I don't see anything unsafe about having pg_upgrade use
  synchronous_commit=off.

 No objection, but this seems unlikely to be better than linear speedup,
 with a not-terribly-large constant factor.

 BTW, does pg_upgrade run pg_restore in --single-transaction mode?
 That would probably make synchronous_commit moot, at least for that
 step.



It doesn't use pg_restore at all - it uses the dump from pg_dumpall, which
you can't reload with pg_restore.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Mon, Nov 5, 2012 at 9:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 BTW, does pg_upgrade run pg_restore in --single-transaction mode?
 That would probably make synchronous_commit moot, at least for that
 step.

 It doesn't use pg_restore at all - it uses the dump from pg_dumpall, which
 you can't reload with pg_restore.

Sorry, I should've said psql --single-transaction.  Although that isn't
going to work either given the presence of \connect commands in the
script.  I wonder whether pg_dumpall ought to have some sort of one
transaction per database please option.
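
(Roughly what such an option would automate; done by hand today it would
look like the following, with an invented database and file name, and it
only works if the per-database script contains no \connect of its own:)

    \connect mydb
    BEGIN;
    \i mydb_objects.sql
    COMMIT;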

regards, tom lane




Re: [HACKERS] [PATCH] Prefetch index pages for B-Tree index scans

2012-11-05 Thread Bruce Momjian
On Fri, Nov  2, 2012 at 09:59:08AM -0400, John Lumby wrote:
 Thanks for mentioning this posting.  Interesting.
 However, the OP describes an implementation based on libaio.
 Today what we have (for Linux) is librt, which is quite different.
 It is arguably worse than libaio (well, actually I am sure it is worse)
 since it is essentially just an encapsulation of using threads to do
 synchronous I/Os - you can look at it as making it easier to do what the
 application could do itself if it set up its own pthreads.  The Linux
 kernel does not know about it, and so the CPU overhead of checking for
 completion is higher.

Well, good thing we didn't switch to using libaio, now that it is gone.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Doc patch, distinguish sections with an empty row in error code table

2012-11-05 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Fri, Oct 26, 2012 at 10:03 AM, Karl O. Pinc k...@meme.com wrote:
 This patch adds an empty row before each section header
 in the error codes table in the docs.

 This doesn't seem like a particularly good idea to me, but what do
 other people think?

It seems like a kluge.  If the vertical spacing isn't nice looking, the
place to fix that is in the stylesheet or formatting macros, not by
hacking table contents.

regards, tom lane




Re: [HACKERS] RFC: Timing Events

2012-11-05 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 Do we have something to add to auto_explain?

 Well, to be frank, I've never found auto-explain to be useful because of
 its restriction to superuser sessions.  It's an interesting
 proof-of-concept, but completely useless at any production site.

Huh?  The typical use-case is to enable it for all sessions by
including it in shared_preload_libraries.  That doesn't require any
particular session to be superuser.  (If you're superuser you can then
turn it *off* in your session, should you wish.)

regards, tom lane




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Bruce Momjian
On Mon, Nov  5, 2012 at 03:30:32PM -0500, Tom Lane wrote:
 Magnus Hagander mag...@hagander.net writes:
  On Mon, Nov 5, 2012 at 9:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  BTW, does pg_upgrade run pg_restore in --single-transaction mode?
  That would probably make synchronous_commit moot, at least for that
  step.
 
  It doesn't use pg_restore at all - it uses the dump from pg_dumpall, which
  you can't reload with pg_restore.
 
 Sorry, I should've said psql --single-transaction.  Although that isn't
 going to work either given the presence of \connect commands in the
 script.  I wonder whether pg_dumpall ought to have some sort of one
 transaction per database please option.

pg_dumpall is already doing lots of gymnastics with SQL, and pg_upgrade
splits the output file into db/user creation and object creation, so I
am hesitant to add anything more in there.

I was surprised by the scale of the performance improvement, but a
simple table creation test confirmed that improvement, regardless of
pg_upgrade.  Perhaps we should suggest synchronous_commit=off for
pg_dumpall restores, particularly when using --schema-only.
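
(A sketch of doing that by hand, with an invented file name.  It is
straightforward for a single-database pg_dump script, but not for pg_dumpall
output, because each \connect in the script starts a fresh session and the
setting is lost again:)

    SET synchronous_commit = off;
    \i dump.sql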

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] RFC: Timing Events

2012-11-05 Thread Jeff Janes
On Sun, Nov 4, 2012 at 1:35 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello

 2012/11/4 Satoshi Nagayasu sn...@uptime.jp:


 Do we have something to add to auto_explain?

 Now I am working on expanding the slow query record and auto_explain with
 some locking times (locks on objects, locks on extending pages, other
 locks).

But this would only work if you used 'auto_explain.log_analyze=1',
which has nasty performance implications.  Or are you planning on
changing the way log_analyze works to get around this?


 Just the statement time produces too little information in our complex and
 unpredictable cloud environment with thousands of databases and hundreds of
 servers.

I think it would be easier to implement but still a big step forward
over what we currently have if explain analyze and \timing would
show the 'rusage' values in addition to the wall-clock time.




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Magnus Hagander
On Mon, Nov 5, 2012 at 9:49 PM, Bruce Momjian br...@momjian.us wrote:

 On Mon, Nov  5, 2012 at 03:30:32PM -0500, Tom Lane wrote:
  Magnus Hagander mag...@hagander.net writes:
   On Mon, Nov 5, 2012 at 9:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
   BTW, does pg_upgrade run pg_restore in --single-transaction mode?
   That would probably make synchronous_commit moot, at least for that
   step.
 
   It doesn't use pg_restore at all - it uses the dump from pg_dumpall,
 which
   you can't reload with pg_restore.
 
  Sorry, I should've said psql --single-transaction.  Although that isn't
  going to work either given the presence of \connect commands in the
  script.  I wonder whether pg_dumpall ought to have some sort of one
  transaction per database please option.

 pg_dumpall is already doing lots of gymnastics with SQL, and pg_upgrade
 splits the output file into db/user creation and object creation, so I
 am hesitant to add anything more in there.


What about running pg_dump in a loop instead of pg_dumpall?



-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Bruce Momjian
On Mon, Nov  5, 2012 at 10:01:22PM +0100, Magnus Hagander wrote:
 On Mon, Nov 5, 2012 at 9:49 PM, Bruce Momjian br...@momjian.us wrote:
 
 On Mon, Nov  5, 2012 at 03:30:32PM -0500, Tom Lane wrote:
  Magnus Hagander mag...@hagander.net writes:
   On Mon, Nov 5, 2012 at 9:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
   BTW, does pg_upgrade run pg_restore in --single-transaction mode?
   That would probably make synchronous_commit moot, at least for that
   step.
 
   It doesn't use pg_restore at all - it uses the dump from pg_dumpall,
 which
   you can't reload with pg_restore.
 
  Sorry, I should've said psql --single-transaction.  Although that isn't
  going to work either given the presence of \connect commands in the
  script.  I wonder whether pg_dumpall ought to have some sort of one
  transaction per database please option.
 
 pg_dumpall is already doing lots of gymnastics with SQL, and pg_upgrade
 splits the output file into db/user creation and object creation, so I
 am hesitant to add anything more in there.
 
 
 What about running pg_dump in a loop instead of pg_dumpall?

Well, I could certainly do pg_dumpall --globals-only, and then I have to
create a pg_dump file for every database, and then add the \connect in
there;  it just seemed easier to use pg_dumpall, though the file split
thing is certainly something I would like to get rid of.

I think I used pg_dumpall because it was an existing tool that I assumed
would be maintained to dump a full cluster.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Jeff Janes
On Mon, Nov 5, 2012 at 12:49 PM, Bruce Momjian br...@momjian.us wrote:
 On Mon, Nov  5, 2012 at 03:30:32PM -0500, Tom Lane wrote:
 Magnus Hagander mag...@hagander.net writes:
  On Mon, Nov 5, 2012 at 9:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  BTW, does pg_upgrade run pg_restore in --single-transaction mode?
  That would probably make synchronous_commit moot, at least for that
  step.

  It doesn't use pg_restore at all - it uses the dump from pg_dumpall, which
  you can't reload with pg_restore.

 Sorry, I should've said psql --single-transaction.  Although that isn't
 going to work either given the presence of \connect commands in the
 script.  I wonder whether pg_dumpall ought to have some sort of one
 transaction per database please option.

 pg_dumpall is already doing lots of gymnastics with SQL, and pg_upgrade
 splits the output file into db/user creation and object creation, so I
 am hesitant to add anything more in there.

 I was surprised by the scale of the performance improvement, but a
 simple table creation test confirmed that improvement, regardless of
 pg_upgrade.  Perhaps we should suggest synchronous_commit=off for
 pg_dumpall restores, particularly when using --schema-only.

Or have options for pg_dump and pg_restore to insert set
synchronous_commit=off into the SQL stream?

Cheers,

Jeff




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 4:07 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 Or have options for pg_dump and pg_restore to insert set
 synchronous_commit=off into the SQL stream?

It would be kind of neat if we had a command that would force all
previously-asynchronous commits to complete.  It seems likely that
very, very few people would care about intermediate pg_dump states, so
we could do the whole dump asynchronously and then do FORCE ALL
COMMITS; or whatever at the end.
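
(For what it's worth, the closest existing thing is a superuser-issued
CHECKPOINT, which does force all WAL written so far out to disk -- but it
also writes out every dirty buffer, so a dedicated "flush pending async
commits" command would be far cheaper:)

    CHECKPOINT;   -- heavyweight: flushes WAL, and much more besides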

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Doc patch, distinguish sections with an empty row in error code table

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Fri, Oct 26, 2012 at 10:03 AM, Karl O. Pinc k...@meme.com wrote:
 This patch adds an empty row before each section header
 in the error codes table in the docs.

 This doesn't seem like a particularly good idea to me, but what do
 other people think?

 It seems like a kluge.  If the vertical spacing isn't nice looking, the
 place to fix that is in the stylesheet or formatting macros, not by
 hacking table contents.

That was my thought as well.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Synchronous commit not... synchronous?

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 2:59 PM, Daniel Farina dan...@heroku.com wrote:
 On Sun, Nov 4, 2012 at 6:00 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Nov 3, 2012 at 5:44 PM, Florian Weimer f...@deneb.enyo.de wrote:
 * Daniel Farina:
 The idea of canceling a COMMIT statement causing a COMMIT seems pretty
 strange to me.

 Canceling commits is inherently racy, so I'm not sure this behavior is
 so strange after all.

 Yeah.  You can't make the local fsync() and the remote fsync() happen
 at exactly the same moment in time.  No implementation can do that,
 anywhere, ever.  Our implementation happens to require the local
 fsync() to always be done first.

 I don't think there is a (unachievable) requirement of simultaneous
 flush, only that two machines have flushed (or met whatever durability
 criteria) strictly more than the position of the commit in question.
 This means some changes are written to some place once, but
 acknowledging commit requires proof of two-safety.

Right, but what you're complaining about is that you can't cancel the
transaction after beginning to make it 2-safe.

 I can see how in some corner cases this might cause orphaning of
 synchronous standbys that write, but cannot acknowledge.

 If the point of synchronous commit is to reach exact two-safety by
 waiting a while for other agents to process data, it would seem that
 the current model could use some less-invasive tweaking, as-is one can
 succeed in an unbounded number of commits in a degenerate case.

Well, feel free to make a suggestion.  We could have a mode where a
commit, once initiated, is not user-cancellable, but that doesn't seem
like a usability improvement to me.  That just forces somebody to
bounce the server in a situation where it isn't necessary.  The
warning is not unclear about what has happened.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Andrew Dunstan


On 11/05/2012 01:53 PM, Magnus Hagander wrote:


On Mon, Nov 5, 2012 at 7:50 PM, Andrew Dunstan and...@dunslane.net wrote:



On 11/05/2012 12:13 PM, Magnus Hagander wrote:



http://www.pgbuildfarm.org/cgi-bin/show_status.pl

...it seems there are LOTS of machines building with krb5, and
NONE with gssapi.



AFAICS there is no icon for gssapi. So your first statement is
correct, but the second one isn't.




If someone would like to give me an icon I'll add it.


Well, if we're removing krb5 we could reuse that one :)

And no, I don't have any good ideas icon-wise to distinguish gssapi from 
krb5...






OK, I have added one - it's the same as krb5 but red.

cheers

andrew




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Bruce Momjian
On Mon, Nov  5, 2012 at 04:14:47PM -0500, Robert Haas wrote:
 On Mon, Nov 5, 2012 at 4:07 PM, Jeff Janes jeff.ja...@gmail.com wrote:
  Or have options for pg_dump and pg_restore to insert set
  synchronous_commit=off into the SQL stream?
 
 It would be kind of neat if we had a command that would force all
 previously-asynchronous commits to complete.  It seems likely that
 very, very few people would care about intermediate pg_dump states, so
 we could do the whole dump asynchronously and then do FORCE ALL
 COMMITS; or whatever at the end.

Actually, I had assumed that a session disconnection forced a WAL fsync
flush, but now I doubt that.  Seems only server shutdown does that, or a
checkpoint.  Would this work?

SET synchronous_commit=on;
CREATE TABLE dummy(x int);
DROP TABLE dummy;

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Jeff Janes
On Mon, Nov 5, 2012 at 1:14 PM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Nov 5, 2012 at 4:07 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 Or have options for pg_dump and pg_restore to insert set
 synchronous_commit=off into the SQL stream?

 It would be kind of neat if we had a command that would force all
 previously-asynchronous commits to complete.  It seems likely that
 very, very few people would care about intermediate pg_dump states, so
 we could do the whole dump asynchronously and then do FORCE ALL
 COMMITS; or whatever at the end.

Yeah, I was wondering what a fool-proof way of doing that would be,
without implementing a new feature.  Turning synchronous_commit back
on and then running and committing a transaction guaranteed to generate
WAL would do it.

Would a simple 'select pg_switch_xlog();' always accomplish the desired flush?

Cheers,

Jeff




Re: [HACKERS] alter table tablename add column - breaks pl/pgsql function returns tablename

2012-11-05 Thread Tom Lane
Palle Girgensohn gir...@pingpong.net writes:
 Please note that this problem does not go away by disconnecting and 
 reconnecting, and other sessions get the error immediately, so the claim that 
 it is bound to a session is false. 

Huh?  The test case you provided certainly doesn't exhibit any such
behavior.  I get

regression=# SELECT * FROM test_func();
ERROR:  wrong record type supplied in RETURN NEXT
CONTEXT:  PL/pgSQL function test_func() line 6 at RETURN NEXT
regression=# \c -
You are now connected to database regression as user postgres.
regression=# SELECT * FROM test_func();
 id | foo 
----+-----
  1 | 
(1 row)

regards, tom lane




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Bruce Momjian
On Mon, Nov  5, 2012 at 01:23:58PM -0800, Jeff Janes wrote:
 On Mon, Nov 5, 2012 at 1:14 PM, Robert Haas robertmh...@gmail.com wrote:
  On Mon, Nov 5, 2012 at 4:07 PM, Jeff Janes jeff.ja...@gmail.com wrote:
  Or have options for pg_dump and pg_restore to insert set
  synchronous_commit=off into the SQL stream?
 
  It would be kind of neat if we had a command that would force all
  previously-asynchronous commits to complete.  It seems likely that
  very, very few people would care about intermediate pg_dump states, so
  we could do the whole dump asynchronously and then do FORCE ALL
  COMMITS; or whatever at the end.
 
 Yeah, I was wondering what a fool-proof way of doing that would be,
 without implementing a new feature.  Turning synchronous_commits back
 on and then doing and committing a transaction guaranteed to generate
 WAL would do it.
 
 Would a simple 'select pg_switch_xlog();' always accomplish the desired flush?

That could generate a lot of WAL files if used regularly.  :-(  Does
SELECT txid_current() generate WAL?  I think it does.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Alvaro Herrera
Bruce Momjian escribió:
 On Mon, Nov  5, 2012 at 04:14:47PM -0500, Robert Haas wrote:
  On Mon, Nov 5, 2012 at 4:07 PM, Jeff Janes jeff.ja...@gmail.com wrote:
   Or have options for pg_dump and pg_restore to insert set
   synchronous_commit=off into the SQL stream?
  
  It would be kind of neat if we had a command that would force all
  previously-asynchronous commits to complete.  It seems likely that
  very, very few people would care about intermediate pg_dump states, so
  we could do the whole dump asynchronously and then do FORCE ALL
  COMMITS; or whatever at the end.
 
 Actually, I had assumed that a session disconnection forced a WAL fsync
 flush, but now I doubt that.  Seems only server shutdown does that, or a
 checkpoint.  Would this work?
 
   SET synchronous_commit=on;
   CREATE TABLE dummy(x int);
   DROP TABLE dummy;

AFAIR any transaction that modifies catalogs gets sync commit forcibly,
regardless of the setting.  And sync commit means you get to wait for
all previous transactions to be flushed as well.  So simply creating a
temp table ought to do the trick ...

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] install zic binary

2012-11-05 Thread Bruce Momjian
On Tue, Oct 23, 2012 at 12:40:54PM -0200, Euler Taveira wrote:
 Hi,
 
 Every year we have a ton of questions about updating the time zone data in
 Brazil (our politicians decided to do it at the 90th minute of the second half).
 Problem is that there is not sufficient time to release a new minor version with
 updated time zone data. It is not a problem for *nix because the zic binary is
 available (even without the --with-system-tzdata option), but on Windows you are
 hosed (you have no option but to build from source). Should we have pg_zic?
 
 Sometimes we need to update time zone data but can't upgrade. Why?
 
 * you're stuck on an unsupported version;
 * your code is relying on a strange behavior that was changed in a minor 
 version;
 * a new minor version was released hours ago but politicians decided to change
 the timezone in a hurry (before the next minor version). It happened in my state
 (Tocantins) last week -- up to 48 hours before DST was due to start, we didn't
 know whether we were in or out of DST. That's because the governor decided
 (without consulting the population) to opt in, but when people said 'no', he
 stepped back and asked the president to opt out; that was too late.
 
 
 [1] http://mm.icann.org/pipermail/tz/2012-October/018347.html

Not sure we can adjust our process to match every politician's actions.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Bruce Momjian
On Mon, Nov  5, 2012 at 06:33:16PM -0300, Alvaro Herrera wrote:
 Bruce Momjian escribió:
  On Mon, Nov  5, 2012 at 04:14:47PM -0500, Robert Haas wrote:
   On Mon, Nov 5, 2012 at 4:07 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Or have options for pg_dump and pg_restore to insert set
synchronous_commit=off into the SQL stream?
   
   It would be kind of neat if we had a command that would force all
   previously-asynchronous commits to complete.  It seems likely that
   very, very few people would care about intermediate pg_dump states, so
   we could do the whole dump asynchronously and then do FORCE ALL
   COMMITS; or whatever at the end.
  
  Actually, I had assumed that a session disconnection forced a WAL fsync
  flush, but now I doubt that.  Seems only server shutdown does that, or a
  checkpoint.  Would this work?
  
  SET synchronous_commit=on;
  CREATE TABLE dummy(x int);
  DROP TABLE dummy;
 
 AFAIR any transaction that modifies catalogs gets sync commit forcibly,
 regardless of the setting.  And sync commit means you get to wait for

Uh, I am not seeing that in my testing, because I was only doing CREATE
TABLE and it was affected by the synchronous_commit value.

 all previous transactions to be flushed as well.  So simply creating a
 temp table ought to do the trick ...

I don't think TEMP tables write to WAL, for performance reasons.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 4:33 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 AFAIR any transaction that modifies catalogs gets sync commit forcibly,
 regardless of the setting.  And sync commit means you get to wait for
 all previous transactions to be flushed as well.  So simply creating a
 temp table ought to do the trick ...

I don't think there's a carve-out for system tables ... but creating a
temp table with synchronous_commit=on will certainly do the trick.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Bruce Momjian
On Mon, Nov  5, 2012 at 04:39:27PM -0500, Robert Haas wrote:
 On Mon, Nov 5, 2012 at 4:33 PM, Alvaro Herrera alvhe...@2ndquadrant.com 
 wrote:
  AFAIR any transaction that modifies catalogs gets sync commit forcibly,
  regardless of the setting.  And sync commit means you get to wait for
  all previous transactions to be flushed as well.  So simply creating a
  temp table ought to do the trick ...
 
 I don't think there's a carve-out for system tables ... but creating a
 temp table with synchronous_commit=on will certainly do the trick.

What is a temp table writing to WAL?  The pg_class/pg_attribute changes?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Robert Haas
On Mon, Nov 5, 2012 at 4:42 PM, Bruce Momjian br...@momjian.us wrote:
 On Mon, Nov  5, 2012 at 04:39:27PM -0500, Robert Haas wrote:
 On Mon, Nov 5, 2012 at 4:33 PM, Alvaro Herrera alvhe...@2ndquadrant.com 
 wrote:
  AFAIR any transaction that modifies catalogs gets sync commit forcibly,
  regardless of the setting.  And sync commit means you get to wait for
  all previous transactions to be flushed as well.  So simply creating a
  temp table ought to do the trick ...

 I don't think there's a carve-out for system tables ... but creating a
 temp table with synchronous_commit=on will certainly do the trick.

 What is a temp table writing to WAL?  The pg_class/pg_attribute changes?

Yes.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Magnus Hagander
On Mon, Nov 5, 2012 at 10:21 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 11/05/2012 01:53 PM, Magnus Hagander wrote:


 On Mon, Nov 5, 2012 at 7:50 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 11/05/2012 12:13 PM, Magnus Hagander wrote:



 
  http://www.pgbuildfarm.org/cgi-bin/show_status.pl

 ...it seems there are LOTS of machines building with krb5, and
 NONE with gssapi.



 AFAICS there is no icon for gssapi. So your first statement is
 correct, but the second one isn't.




 If someone would like to give me an icon I'll add it.


 Well, if we're removing krb5 we could reuse that one :)

 And no, I don't have any good ideas icon-wise to distinguish gssapi from
 krb5...




 OK, I have added one - it's the same as krb5 but red.


Thanks.

Is there something we can do to get more animals to build with it by
default, or is that something that each individual animal-owner has to
change?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Andres Freund
On Mon, Nov 05, 2012 at 04:42:56PM -0500, Bruce Momjian wrote:
 On Mon, Nov  5, 2012 at 04:39:27PM -0500, Robert Haas wrote:
  On Mon, Nov 5, 2012 at 4:33 PM, Alvaro Herrera alvhe...@2ndquadrant.com 
  wrote:
   AFAIR any transaction that modifies catalogs gets sync commit forcibly,
   regardless of the setting.  And sync commit means you get to wait for
   all previous transactions to be flushed as well.  So simply creating a
   temp table ought to do the trick ...

SET synchronous_commit = on;
SELECT txid_current();

Should be enough.

  I don't think there's a carve-out for system tables ... but creating a
  temp table with synchronous_commit=on will certainly do the trick.

 What is a temp table writing to WAL?  The pg_class/pg_attribute changes?

Yes.

Andres




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Jeff Janes
On Mon, Nov 5, 2012 at 1:39 PM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Nov 5, 2012 at 4:33 PM, Alvaro Herrera alvhe...@2ndquadrant.com 
 wrote:
 AFAIR any transaction that modifies catalogs gets sync commit forcibly,
 regardless of the setting.  And sync commit means you get to wait for
 all previous transactions to be flushed as well.  So simply creating a
 temp table ought to do the trick ...

 I don't think there's a carve-out for system tables ... but creating a
 temp table with synchronous_commit=on will certainly do the trick.

But that seems like something that might be optimized away in the
future (for example, so that temp tables can be used on hot standbys)
resulting in action-at-a-distance breakage.

Is txid_current() more fundamental, i.e. less likely to change?

Cheers,

Jeff




Re: [HACKERS] Deprecations in authentication

2012-11-05 Thread Andrew Dunstan


On 11/05/2012 04:54 PM, Magnus Hagander wrote:
On Mon, Nov 5, 2012 at 10:21 PM, Andrew Dunstan and...@dunslane.net wrote:



On 11/05/2012 01:53 PM, Magnus Hagander wrote:


On Mon, Nov 5, 2012 at 7:50 PM, Andrew Dunstan
and...@dunslane.net wrote:


On 11/05/2012 12:13 PM, Magnus Hagander wrote:



http://www.pgbuildfarm.org/cgi-bin/show_status.pl

...it seems there are LOTS of machines building
with krb5, and
NONE with gssapi.



AFAICS there is no icon for gssapi. So your first
statement is
correct, but the second one isn't.




If someone would like to give me an icon I'll add it.


Well, if we're removing krb5 we could reuse that one :)

And no, I don't have any good ideas icon-wise to distinguish
gssapi from krb5...




OK, I have added one - it's the same as krb5 but red.


Thanks.

Is there something we can do to get more animals to build with it by 
default, or is that something that each individual animal-owner has to 
change?



Well, I can change the defaults in the sample config file, which will 
be picked up in the new release later this week. And we can ask existing 
owners on the owners' mailing list.


cheers

andrew






Re: [HACKERS] alter table tablename add column - breaks pl/pgsql function returns tablename

2012-11-05 Thread Palle Girgensohn


5 nov 2012 kl. 19:36 skrev Robert Haas robertmh...@gmail.com:

 On Thu, Nov 1, 2012 at 12:14 AM, Amit kapila amit.kap...@huawei.com wrote:
 Is this very hard to fix?
 
   Currently the compiled body is not discarded on DDL's, so I believe it is 
 not a bug as per current implementation.
   However it can be thought of as a new feature.
 
 Seems like a bug to me.
 

Please note that this problem does not go away by disconnecting and 
reconnecting, and other sessions get the error immediately, so the claim that 
it is bound to a session is false. 

The work-around I use now is to create or replace function ... with a 
verbatim copy of what is already the defined function. Seems stupid to me, I 
agree it seems like a bug. 


 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company




Re: [HACKERS] foreign key locks

2012-11-05 Thread Dimitri Fontaine
Alvaro Herrera alvhe...@2ndquadrant.com writes:
 FOR NON KEY UPDATE
 FOR KEY UPDATE
 
 KEY is the default, so FOR UPDATE is a synonym of FOR KEY UPDATE

 Not really sure about the proposed syntax, but yes clearly we need some
 other syntax to mean FOR NON KEY UPDATE.  I would rather keep FOR
 UPDATE to mean what I currently call FOR KEY UPDATE.  More proposals for
 the other (weaker) lock level welcome (but if you love FOR NON KEY
 UPDATE, please chime in too)

FOR ANY UPDATE, synonym of FOR UPDATE
FOR KEY UPDATE, optimized version, when it applies to your case

I also tend to think that we had better not change the current
meaning of FOR UPDATE, and have it default to FOR ANY UPDATE.

Unless it's easy to upgrade from ANY to KEY, and do that automatically
at the right time, but I fear there lie dragons (or something).
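
(To make the candidate spellings concrete -- apart from plain FOR UPDATE,
none of these is released syntax; they are the proposals being weighed above:)

    SELECT * FROM t WHERE id = 1 FOR UPDATE;           -- existing syntax
    SELECT * FROM t WHERE id = 1 FOR KEY UPDATE;       -- proposed
    SELECT * FROM t WHERE id = 1 FOR NON KEY UPDATE;   -- proposed
    SELECT * FROM t WHERE id = 1 FOR ANY UPDATE;       -- alternative suggested here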

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Synchronous commit not... synchronous?

2012-11-05 Thread Daniel Farina
On Mon, Nov 5, 2012 at 1:19 PM, Robert Haas robertmh...@gmail.com wrote:
 Well, feel free to make a suggestion.  We could have a mode where a
 commit, once initiated, is not user-cancellable, but that doesn't seem
 like a usability improvement to me.  That just forces somebody to
 bounce the server in a situation where it isn't necessary.  The
 warning is not unclear about what has happened.

Yeah, I'm not quite so far as thinking about the best way (much less
any way) of solving the problem, only so far as it's definitely
possible to successfully commit as much as you want, in violation of
2-safety, syncrep setting or no, and that seems like an interesting
violation of an invariant one might presume to hold.

The warning is there, but it does render the feature more fragile
when exposed through the very thin channel one has when dealing with
database users at arm's length, as I must.  Only so many caveats and
so much fine print can be shoved at the user -- this is why, for example,
support of pooling has been a heady proposition for me.

I think this is still in the realm of brain-food, since there is no
obvious model to fix this in sight.

--
fdr




Re: [HACKERS] What are the advantages of not being able to access multiple databases with one connection?

2012-11-05 Thread Josh Berkus

 functional gap here.  I am not sure I'd bother implementing the
 multi-database concept today if we didn't have it already ... but it
 seems kind of pointless to rip it out given that it's already there.

It's very useful for webhosts.  You can give each user their own private
database and not worry about them hacking into other people's.
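
(The pattern in question, roughly -- role and database names invented:)

    CREATE ROLE alice LOGIN PASSWORD 'secret';
    CREATE DATABASE alice_db OWNER alice;
    REVOKE CONNECT ON DATABASE alice_db FROM PUBLIC;
    -- alice has free rein inside alice_db, but cannot even connect to the
    -- other tenants' databases if they are locked down the same way.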

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] alter table tablename add column - breaks pl/pgsql function returns tablename

2012-11-05 Thread Palle Girgensohn


5 nov 2012 kl. 22:23 skrev Tom Lane t...@sss.pgh.pa.us:

 Palle Girgensohn gir...@pingpong.net writes:
 Please note that this problem does not go away by disconnecting and 
 reconnecting, and other sessions get the error immediately, so the claim 
 that it is bound to a session is false.
 
 Huh?  The test case you provided certainly doesn't exhibit any such
 behavior.  I get
 
 regression=# SELECT * FROM test_func();
 ERROR:  wrong record type supplied in RETURN NEXT
 CONTEXT:  PL/pgSQL function test_func() line 6 at RETURN NEXT
 regression=# \c -
 You are now connected to database regression as user postgres.
 regression=# SELECT * FROM test_func();
 id | foo 
 ----+-----
   1 | 
 (1 row)
 
regards, tom lane


Ah, sorry. Other sessions get the error immediately as well though. Would input 
parameters matter, or is it just the return type? I'll see if I can find a test 
case that breaks permanently, but I'm probably mistaken about that bit then. 



Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Jeff Janes
On Mon, Nov 5, 2012 at 12:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Bruce Momjian br...@momjian.us writes:
 Magnus reported that a customer with a million tables was finding
 pg_upgrade slow.

 You sure there's not an O(N^2) issue in there somewhere?

There certainly will be before he gets to a million, but it probably
doesn't show up yet when testing at 2000.

He will probably have to hack pg_dump, as discussed here:

http://archives.postgresql.org/pgsql-performance/2012-09/msg3.php



 I don't see anything unsafe about having pg_upgrade use
 synchronous_commit=off.

 No objection, but this seems unlikely to be better than linear speedup,
 with a not-terribly-large constant factor.

 BTW, does pg_upgrade run pg_restore in --single-transaction mode?
 That would probably make synchronous_commit moot, at least for that
 step.

Doing that might make the sync problem better, but would make the N^2
problem worse if upgrading to = 9.2 .

Cheers,

Jeff




Re: [HACKERS] What are the advantages of not being able to access multiple databases with one connection?

2012-11-05 Thread Merlin Moncure
On Mon, Nov 5, 2012 at 11:33 AM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Oct 30, 2012 at 8:37 AM, crocket crockabisc...@gmail.com wrote:
 MySQL permits a connection to access multiple databases.
 But Postgresql restricts a connection to one database.
 I think postgresql database connection is somewhat limited.

 Is it an old and decrepit design? or does it deserve some appreciations?

 I think it deserves some appreciation.  Each database is completely
 isolated in terms of privileges, which is sometimes useful.  Also, if
 you somehow manage to fry the system catalogs in one database, the
 other ones can still survive.  The role played by databases in MySQL
 is served by schemas in PostgreSQL, so I don't see that there is a
 functional gap here.  I am not sure I'd bother implementing the
 multi-database concept today if we didn't have it already ... but it
 seems kind of pointless to rip it out given that it's already there.

A little trivia: postgres supports fully database-qualified identifier names:
postgres=# select postgres.public.foo.i from postgres.public.foo;

Even though you can't specify any other database than the one you're in.

merlin




Re: [HACKERS] alter table tablename add column - breaks pl/pgsql function returns tablename

2012-11-05 Thread Tom Lane
Palle Girgensohn gir...@pingpong.net writes:
 Ah, sorry. Other sessions get the error immediately as well though. Would 
 input parameters matter, or is it just the return type? I'll see if I can 
 find a test case that breaks permanently, but I'm probably mistaken about 
 that bit then. 

It's not the return value as such that's choking, it's the local
variable.  I believe the issue would appear with any local variable or
parameter of a named composite type.
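
(For readers without the original test case at hand, a minimal sketch of the
shape of function being discussed -- names invented:)

    CREATE TABLE test_table (id int);
    INSERT INTO test_table VALUES (1);

    CREATE FUNCTION test_func() RETURNS SETOF test_table AS $$
    DECLARE
        r test_table;    -- local variable of a named composite type
    BEGIN
        FOR r IN SELECT * FROM test_table LOOP
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    SELECT * FROM test_func();                   -- works
    ALTER TABLE test_table ADD COLUMN foo text;
    SELECT * FROM test_func();                   -- same session: "wrong record type"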

The general case of this is quite difficult: should we expect that an
ALTER TYPE done (perhaps in some other session) while a function is
running would affect the *current value* of such a local variable?
There's really no practical way to implement that in the current system
structure, and certainly no way to enforce the behavior you get for row
values in regular tables, namely that the ALTER TYPE rolls back if any
row value conversion fails.

However I think we could realistically hope that subsequent function
calls would work with the up-to-date rowtype definition.  My opinion
about how to do that is to stop using the row code path in plpgsql for
values of named composite types, and instead treat them as records;
that is, store a HeapTuple value plus a tuple descriptor (or something
morally equivalent such as a TupleTableSlot) and not break the value up
into a separate plpgsql variable (a/k/a PLpgSQL_datum) per column.  The
fundamental problem here is that doing that bakes the rowtype's column
set into the compiled form of the function.

There was some objection to that in the previous discussion on the
grounds of possible performance loss, but I think that objection is
at best premature; it ignores some salient facts such as
(1) some operations would get faster not slower,
(2) there is scope for performance-improvement efforts,
(3) per the old saying, code can be arbitrarily fast if it doesn't
have to give the right answer.  Objecting to this fix without
proposing a more-workable alternative is useless.

regards, tom lane




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 That could generate a lot of WAL files if used regularly.  :-(  Does
 SELECT txid_current() generate WAL?  I think it does.

Well, it assigns a XID.  I'm not sure it'd be a good idea to assume that
the mere act of doing that, without actually writing anything to tables,
would result in a synchronous commit.  (For example, if the transaction
were to abort not commit, I'm pretty sure we'd not bother to fsync its
abort record.  There might be, today or in the future, a similar
optimization for successful xacts that created no WAL records.)

I thought the idea of creating a temp table was the most robust one.
A regular table would be even more certain to generate an fsync, but
it has the disadvantages that you can't easily guarantee no name
collision against a user table, nor guarantee that the table wouldn't
be left behind after a crash at the wrong instant.
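
(Putting this subthread together, the manual recipe would look roughly like
this -- a sketch, not a tested tool:)

    -- after a stream of synchronous_commit=off work:
    SET synchronous_commit = on;
    BEGIN;
    CREATE TEMP TABLE sync_dummy (x int);  -- the catalog changes generate WAL
    COMMIT;  -- this synchronous commit waits for the WAL flush, which also
             -- covers every earlier asynchronous commit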

regards, tom lane




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Josh Berkus

 Sorry, I should've said psql --single-transaction.  Although that isn't
 going to work either given the presence of \connect commands in the
 script.  I wonder whether pg_dumpall ought to have some sort of one
 transaction per database please option.

pg_dumpall ought to support -Fc output ...


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] RFC: Timing Events

2012-11-05 Thread Josh Berkus

 Huh?  The typical use-case is to enable it for all sessions by
 including it in shared_preload_libraries.  That doesn't require any
 particular session to be superuser.  (If you're superuser you can then
 turn it *off* in your session, should you wish.)

It's not practical to have auto-explain on for all queries on a server
which is processing 10K queries/minute.  And non-superusers can't alter
the settings in their session, even the min_duration.

A practical use of auto-explain would involve turning it on for a single
user session, or for a specific database user, neither of which is
possible.
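
(Illustratively, what a non-superuser runs into today, since auto_explain's
settings are superuser-only GUCs:)

    SET auto_explain.log_min_duration = '100ms';
    ERROR:  permission denied to set parameter "auto_explain.log_min_duration"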

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Bruce Momjian
On Mon, Nov  5, 2012 at 05:39:40PM -0800, Josh Berkus wrote:
 
  Sorry, I should've said psql --single-transaction.  Although that isn't
  going to work either given the presence of \connect commands in the
  script.  I wonder whether pg_dumpall ought to have some sort of one
  transaction per database please option.
 
 pg_dumpall ought to support -Fc output ...

That is already a TODO:

Add pg_dumpall custom format dumps?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Pg_upgrade speed for many tables

2012-11-05 Thread Andrew Dunstan


On 11/05/2012 08:52 PM, Bruce Momjian wrote:

On Mon, Nov  5, 2012 at 05:39:40PM -0800, Josh Berkus wrote:

Sorry, I should've said psql --single-transaction.  Although that isn't
going to work either given the presence of \connect commands in the
script.  I wonder whether pg_dumpall ought to have some sort of one
transaction per database please option.

pg_dumpall ought to support -Fc output ...

That is already a TODO:

Add pg_dumpall custom format dumps?



That '?' isn't an accident. Custom format is currently inherently 
single-database. Unless you're going to make pg_dumpall produce multiple 
custom format archives, that would involve a major change that nobody 
has designed AFAIK.



cheers

andrew




Re: [HACKERS] Logical to physical page mapping

2012-11-05 Thread Bruce Momjian
On Mon, Oct 29, 2012 at 07:05:39AM -0400, Robert Haas wrote:
 Yet another idea we've tossed around is to make only vacuum records
 include FPWs, and have the more common heap insert/update/delete
 operations include enough information that they can still be applied
 correctly even if the page has been torn by the previous replay of
 such a record.  This would involve modifying the recovery algorithm so
 that, until consistency is reached, we replay all records, regardless
 of LSN, which would cost some extra I/O, but maybe not too much to
 live with?  It would also require that, for example, a heap-insert
 record mention the line pointer index used for the insertion;
 currently, we count on the previous state of the page to tell us that.
  For checkpoint cycles of reasonable length, the cost of storing the
 line pointer in every WAL record seems like it'll be less than the
 cost needing to write an FPI for the page once per checkpoint cycle,
 but this isn't certain to be the case for all workloads.

This last idea has the most promise for me.  Vacuum is far less common
than row modification writes.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Doc patch, distinguish sections with an empty row in error code table

2012-11-05 Thread Karl O. Pinc
On 11/05/2012 02:40:12 PM, Tom Lane wrote:
 Robert Haas robertmh...@gmail.com writes:
  On Fri, Oct 26, 2012 at 10:03 AM, Karl O. Pinc k...@meme.com 
 wrote:
  This patch adds an empty row before each section header
  in the error codes table in the docs.
 
  This doesn't seem like a particularly good idea to me, but what do
  other people think?
 
 It seems like a kluge.  If the vertical spacing isn't nice looking,
 the
 place to fix that is in the stylesheet or formatting macros, not by
 hacking table contents.

The attached patch, errorcode_table_v2.patch, is an attempt to
do it the right way.

IMO the right way, instead of attempting to manually style
table data rows into column headers as the code presently does, 
is to use multiple tgroups, each
of which has its own header.  This results in
a single table, with multiple sub-tables
and lets the toolchain style the headers as 
appropriate.  Using multiple tgroups is the
approach taken in this patch.

In my very limited experience, alternate output formats
(formats other than html, like PDF) format tables containing
multiple tgroups sensibly: the output is a single table
with sub-headings and consistent column widths throughout.
So, this new patch does not attempt to apply any additional
style to the multi-tgroup table for non-html formats.

However, this new patch does not work for html.  Or, rather, 
it works but produces ugly html output.  The html output 
consists of multiple tables, one for each tgroup, each of which, 
by default, might be (and is) a different width.

Ugly.

Normally this can be fixed by styling the html with css.
However in this case there are 2 problems.  The first is
the serious one.

I'd expect to be able to set %entry-propagates-style%,
or at least %phrase-propagates-style%, and use a role=style
attribute to style all the columns to a consistent width.
But neither %entry-propagates-style% nor
%phrase-propagates-style% works, so the styling info
never makes it through the toolchain into
the html.

(The patch as submitted to you attempts to use
%phrase-propagates-style%.  I believe this
is true by default and I shouldn't have to
set it in the stylesheet.dsl, but there's
code in this patch to do this anyway.)

I don't know why the styling does not make it
through to the html.  There's dsssl stylesheets
on my box that purport to handle %phrase-propagates-style%,
but I didn't check to be sure that these were the stylesheets
actually in use.  Perhaps the docbook version used by the pg
docs is not new enough to use a stylesheet which supports
this.  I don't think the custom navbar heading styling
done in stylesheet.dsl is the problem, but I suppose it's
possible.  Or maybe I've a typo.  In short, I've no clue 
why this patch fails.

(Someday I imagine that pg will want to move to Docbook 5,
and I don't even know if there's dsssl stylesheets for
Docbook 5.)

I could hack the doc/src/sgml/stylesheet.dsl file and
put in the code that processes %phrase-propagates-style%,
but that seems a little crazy.

The second problem is that there's a make variable
(STYLE=website) which causes the generated html to use
the css found on the pg website.  This patch does not
frob the website's css so even if this patch otherwise
worked somebody would have to change the pg website's
css.

So at this point I'm out of ideas.  Unless somebody
can chime in with a clue I'm ready to give up.

It might be possible to improve the look of the
current output by throwing some xsl/dsssl styling at the
box edges of the faked table headers or do something
else along those lines.  IMHO the right way forward
is to get tgroups working.

Regards,

Karl k...@meme.com
Free Software:  You don't pay back, you pay forward.
 -- Robert A. Heinlein

diff --git a/doc/src/sgml/errcodes.sgml b/doc/src/sgml/errcodes.sgml
index 16cb6c7..2388daf 100644
--- a/doc/src/sgml/errcodes.sgml
+++ b/doc/src/sgml/errcodes.sgml
@@ -53,24 +53,8 @@
 table id=errcodes-table
  titleproductnamePostgreSQL/productname Error Codes/title
 
- tgroup cols=2
-  colspec colnum=1 colname=errorcode
-  colspec colnum=2 colname=condname
-  spanspec namest=errorcode nameend=condname spanname=span12
+  errcodes-table;
 
-  thead
-   row
-entryError Code/entry
-entryCondition Name/entry
-   /row
-  /thead
-
-  tbody
-
-errcodes-table;
-
-  /tbody
- /tgroup
 /table
 
 
diff --git a/doc/src/sgml/generate-errcodes-table.pl b/doc/src/sgml/generate-errcodes-table.pl
index b9c14d3..91cda36 100644
--- a/doc/src/sgml/generate-errcodes-table.pl
+++ b/doc/src/sgml/generate-errcodes-table.pl
@@ -6,11 +6,51 @@
 use warnings;
 use strict;
 
+sub start_tgroup($) {
+	my $sname = shift;
+
+	print 'EOF';
+
+
+ tgroup cols=2
+  colspec colnum=1 colname=errorcode
+  colspec colnum=2 colname=condname
+  spanspec namest=errorcode nameend=condname spanname=span12
+
+  thead
+row
+  entry spanname=span12
+EOF
+
+	print $sname/entry\n;
+
+	print 'EOF';
+   /row
+   row
+

Re: [HACKERS] Arguments to foreign tables?

2012-11-05 Thread Jeff Davis
On Sun, 2012-11-04 at 15:13 -0500, Tom Lane wrote:
 Jeff Davis pg...@j-davis.com writes:
  Is there any fundamental or philosophical reason why a foreign table
  can't accept arguments?
 
 That isn't a table; it's some sort of function.  Now that we have
 LATERAL, there is no good reason to contort SQL's syntax and semantics
 in the direction you suggest.

Maybe I should rephrase this as a problem with SRFs: you don't get to
define the init/exec/end executor functions, and you don't get access to
the optimizer information.

It seems like foreign tables are a better mechanism (except for the
simple cases where you don't care about the details), and the only thing
an SRF can do that a foreign table can't is accept arguments. So, I
thought maybe it would make more sense to combine the mechanisms
somehow.

Take something as simple as generate_series: right now, it materializes
the entire thing if it's in the FROM clause, but it wouldn't need to if
it could use the foreign table mechanism.
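
(Illustrating the point: the whole result set below is built in a tuplestore
before any rows reach the caller, even though only ten are wanted:)

    SELECT g FROM generate_series(1, 10000000) AS g LIMIT 10;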

Regards,
Jeff Davis





Re: [HACKERS] Statistics and selectivity estimation for ranges

2012-11-05 Thread Jeff Davis
On Mon, 2012-11-05 at 11:12 -0300, Alvaro Herrera wrote:
 What's going on with this patch?  I haven't seen any activity in a
 while.  Should I just move this to the next commitfest?

Sorry, I dropped the ball here. I will still review it, whether it makes
this commitfest or not.

Regards,
Jeff Davis





Re: [HACKERS] Update obsolete text in indexam.sgml

2012-11-05 Thread Etsuro Fujita
 From: Tom Lane [mailto:t...@sss.pgh.pa.us]

 Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
  ISTM it would be better to update the text about index cost estimation in
  indexam.sgml.  Please find attached a patch.

 I'm not too thrilled with the proposed patch.  In the first place, I
 don't think it's necessary to address costing of index order-by
 expressions in an introductory explanation.

Agreed.

 In the second, this change
 makes the code less clear, not more so, because it introduces a variable
 indexQuals without showing where you would get that value from.

Agreed.  However, I am concerned about the next comment in the current code:

/*
 * Our generic assumption is that the index pages will be read
 * sequentially, so they cost seq_page_cost each, not random_page_cost.
 * ...

I think this assumption is completely wrong, which has given me a motivation to
propose a patch, though I may be missing something.

Thanks,

Best regards,
Etsuro Fujita



