Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Jeff Davis
On Tue, 2011-10-11 at 13:22 -0400, Robert Haas wrote:
 The real issue is that the costing estimates need to be accurate, and
 that's where the rubber hits the road.  Otherwise, even if we pick the
 right way to scan the table, we may do silly things up the line when
 we go to start constructing the join order.  I think we need to beef
 up ANALYZE to gather statistics on the fraction of the pages that are
 marked all-visible, or maybe VACUUM should gather that information.
 The trouble is that if we VACUUM and then ANALYZE, we'll often get
 back a value very close to 100%, but then the real value may diminish
 quite a bit before the next auto-analyze fires.  I think if we can
 figure out what to do about that problem we'll be well on our way...

Can you send stats messages to keep track of when you unset a bit in the
VM? That might allow it to be more up-to-date.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: casts row to array and array to row

2011-10-12 Thread Pavel Stehule
2011/10/11 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 2011/10/11 Robert Haas robertmh...@gmail.com:
 On Tue, Oct 11, 2011 at 4:40 AM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 What do you think about this idea?

 It's a bad one.

 Well, a ROW can contain values of different types; an ARRAY can't.

 yes, I know - but it should be a problem only in a few cases - when it
 is not possible to cast a row field to an array field.

 This idea is basically the same as "data types don't matter", which is
 not SQL-ish and certainly not Postgres-ish.

This proposal is not about that. The data types are important, and I
don't propose a universal data type or some automatic datatype. The
result of the cast operation has a known type, determined at plan time.

The proposal is more about respecting data types than what we do now.
Some row-based operations are based on serialization and deserialization
to text - in PL/Perl or PL/pgSQL, at the user level or the system level.
When you have to do such a task, you have to solve quoting, NULL
replacement, and so on. Casts between arrays and rows would just remove
these ugly hacks - so the work can be faster and more robust (without
string operations where possible, and without quoting string ops at
least).
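
For illustration, a hedged sketch of the intended usage (this is the
proposed syntax, not anything current PostgreSQL accepts, and the
composite type name is made up):

    -- proposed: row -> array, legal only when every field is castable
    -- to the target element type
    SELECT ROW(1, 2, 3)::int[];

    -- proposed: array -> row, filling the fields positionally
    -- (three_ints is a hypothetical composite type of three integers)
    SELECT ARRAY[1, 2, 3]::three_ints;

Today the same effect requires serializing the row to text and parsing
it back, with all the quoting and NULL handling mentioned above.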

Unfortunately I am not able to solve this at the custom-function
level, because I can't specify a target type from inside a
function (I am missing some polymorphic type like anytype).

Regards

Pavel Stehule


                        regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Online base backup from the hot-standby

2011-10-12 Thread Jun Ishiduka

  Some testing notes
  --
  select pg_start_backup('x');
  ERROR: full_page_writes on master is set invalid at least once since
  latest checkpoint
  
  I think this error should be rewritten as
  ERROR: full_page_writes on master has been off at some point since
  latest checkpoint
  
  We should be using 'off' instead of 'invalid' since that is what
  the user sets it to.
 
 Sure.


  Minor typo above at 'CHECKPOTNT'
 
 Yes.


I updated the patch according to the above comments.

Regards.



Jun Ishizuka
NTT Software Corporation
TEL:045-317-7018
E-Mail: ishizuka@po.ntts.co.jp



standby_online_backup_09base-03fpw.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Overhead cost of Serializable Snapshot Isolation

2011-10-12 Thread Peter Eisentraut
On tis, 2011-10-11 at 21:50 +0100, Simon Riggs wrote:
 I'm keen to ensure people enjoy the possibility of upgrading to the
 latest release. The continual need to retest applications means that
 very few users upgrade quickly or with anywhere near the frequency
 with which we put out new releases. What is the point of rushing out
 software that nobody can use? pg_upgrade doesn't change your
 applications, so there isn't a fast path to upgrade in the way you
 seem to think.

This is a valid concern, which I share, but I think adding a few
configuration parameters of the nature "this setting really means what
this setting meant in the old release" is only the tip of the iceberg.
Ensuring full compatibility between major releases would require an
extraordinary amount of effort, including a regression test suite that
would be orders of magnitude larger than what we currently have.  I
frankly don't see the resources to do that.

The workaround strategy is that we maintain backbranches, so that users
are not forced to upgrade to incompatible releases.

Actually, I'm currently personally more concerned about the breakage we
introduce in minor releases.  We'd need to solve that problem before we
can even begin to think about dealing with the major release issue.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Online base backup from the hot-standby

2011-10-12 Thread Fujii Masao
2011/10/12 Jun Ishiduka ishizuka@po.ntts.co.jp:
  ERROR: full_page_writes on master is set invalid at least once since
  latest checkpoint
 
  I think this error should be rewritten as
  ERROR: full_page_writes on master has been off at some point since
  latest checkpoint
 
  We should be using 'off' instead of 'invalid' since that is what
  the user sets it to.

 Sure.

What about the following message? It sounds more precise to me.

ERROR: WAL generated with full_page_writes=off was replayed since last
restartpoint

 I updated the patch according to the above comments.

Thanks for updating the patch! Here are the comments:

 * don't yet have the insert lock, forcePageWrites could change under us,
 * but we'll recheck it once we have the lock.
 */
-   doPageWrites = fullPageWrites || Insert->forcePageWrites;
+   doPageWrites = Insert->fullPageWrites || Insert->forcePageWrites;

The source comment needs to be modified.

 * just turned off, we could recompute the record without full pages, but
 * we choose not to bother.)
 */
-   if (Insert->forcePageWrites && !doPageWrites)
+   if ((Insert->fullPageWrites || Insert->forcePageWrites) && !doPageWrites)

Same as above.

+   LWLockAcquire(WALInsertLock, LW_EXCLUSIVE);
+   XLogCtl->Insert.fullPageWrites = fullPageWrites;
+   LWLockRelease(WALInsertLock);

I don't think WALInsertLock needs to be held here because there is no
concurrently running process which can access Insert.fullPageWrites.
For example, Insert->currpos and Insert->LogwrtResult are also changed
without the lock there.

The source comment of XLogReportParameters() needs to be modified.

XLogReportParameters() should skip writing WAL if full_page_writes has not been
changed by SIGHUP.

XLogReportParameters() should skip updating pg_control if any parameter related
to hot standby has not been changed.

+   if (!fpw_manager)
+   {
+       LWLockAcquire(WALInsertLock, LW_EXCLUSIVE);
+       fpw = XLogCtl->Insert.fullPageWrites;
+       LWLockRelease(WALInsertLock);

It's safe to take WALInsertLock in shared mode here.

In checkpoint, XLogReportParameters() is called only when wal_level is
hot_standby.
OTOH, in walwriter, it's always called even when wal_level is not hot_standby.
Can't we skip calling XLogReportParameters() whenever wal_level is not
hot_standby?

In do_pg_start_backup() and do_pg_stop_backup(), the spinlock must be held to
see XLogCtl->lastFpwDisabledLSN.

+   /* check whether the master's FPW is 'off' since pg_start_backup. */
+   if (recovery_in_progress && XLByteLE(startpoint, XLogCtl->lastFpwDisabledLSN))
+       ereport(ERROR,
+               (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+                errmsg("full_page_writes on master has been off at some point "
+                       "during online backup")));

What about changing the error message to:
ERROR: WAL generated with full_page_writes=off was replayed during online backup

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Overhead cost of Serializable Snapshot Isolation

2011-10-12 Thread Simon Riggs
On Wed, Oct 12, 2011 at 6:34 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 11.10.2011 23:21, Simon Riggs wrote:

 If the normal default_transaction_isolation = read committed and all
 transactions that require serializable are explicitly marked in the
 application then there is no way to turn off SSI without altering the
 application. That is not acceptable, since it causes changes in
 application behaviour and possibly also performance issues.

 I don't get that. If all the transactions that require serializability are
 marked as such, why would you disable SSI for them? That would just break
 the application, since the transactions would no longer be serializable.

 If they don't actually need serializability, but repeatable read is enough,
 then mark them that way.

Obviously, if apps require serializability then turning
serializability off would break them. That is not what I have said,
nor is it what I would mean by turning off SSI.

The type of serializability we had in the past is now the same as
repeatable read. So the request is for a parameter that changes a
request for serializable into a request for repeatable read, so that
applications are provided with exactly what they had before, in 9.0
and earlier.
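
For reference, a minimal illustration of that mapping using only
existing knobs (a hedged sketch; the proposed parameter itself does not
exist):

    -- per session, per database, or in postgresql.conf:
    SET default_transaction_isolation = 'repeatable read';

    -- or per transaction:
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;

Neither of these helps when the application explicitly requests
SERIALIZABLE, which is exactly the case the proposed parameter would
cover by silently downgrading such requests to REPEATABLE READ.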

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Heikki Linnakangas

On 12.10.2011 10:58, Stefan Kaltenbrunner wrote:

On 10/12/2011 09:53 AM, Martin Pitt wrote:

Hello all,

In https://launchpad.net/bugs/835502 it was reported that the 9.1.1
contrib *.sql files contain the token MODULE_PATHNAME, which is
unknown:

   psql test < /usr/share/postgresql/9.1/extension/intarray--1.0.sql

This fails with a ton of errors about the file MODULE_PATHNAME not
existing.

When I replace this with $libdir/_int, it works:

   sed 's!MODULE_PATHNAME!$libdir/_int!g' /usr/share/postgresql/9.1/extension/intarray--1.0.sql | psql test

Is that something I'm doing wrong in the packaging, or should the
contrib Makefiles be fixed to do this substitution?

It doesn't only affect intarray, but pretty much all *.sql files.


uh - the reason is that contrib is now packaged as extensions and that
you are supposed to use CREATE EXTENSION intarray; on the SQL level
instead of manually loading sql-scripts through psql.


9.1 has been out for only a couple of months, and we've seen a lot of 
people trying to do that already. In hindsight, we probably should've 
chosen a different filename extension for those files, to make it clear 
that you can't just run them in psql. It's too late for that, but a 
comment at the top of the .sql files would be good:


--- a/contrib/intarray/intarray--1.0.sql
+++ b/contrib/intarray/intarray--1.0.sql
@@ -1,4 +1,8 @@
-/* contrib/intarray/intarray--1.0.sql */
+/*
+ * contrib/intarray/intarray--1.0.sql
+ *
+ * Script file to be run by CREATE EXTENSION.
+ */

 --
 -- Create the user-defined type for the 1-D integer arrays (_int4)

The people trying to run these files with psql look inside the file when 
they get the error, so mentioning CREATE EXTENSION should give a hint 
on what to do.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Overhead cost of Serializable Snapshot Isolation

2011-10-12 Thread Simon Riggs
On Wed, Oct 12, 2011 at 8:50 AM, Peter Eisentraut pete...@gmx.net wrote:
 On tis, 2011-10-11 at 21:50 +0100, Simon Riggs wrote:
 I'm keen to ensure people enjoy the possibility of upgrading to the
 latest release. The continual need to retest applications means that
 very few users upgrade quickly or with anywhere near the frequency
 with which we put out new releases. What is the point of rushing out
 software that nobody can use? pg_upgrade doesn't change your
 applications, so there isn't a fast path to upgrade in the way you
 seem to think.

 This is a valid concern, which I share, but I think adding a few
 configuration parameters of the nature "this setting really means what
 this setting meant in the old release" is only the tip of the iceberg.
 Ensuring full compatibility between major releases would require an
 extraordinary amount of effort, including a regression test suite that
 would be orders of magnitude larger than what we currently have.  I
 frankly don't see the resources to do that.

 The workaround strategy is that we maintain backbranches, so that users
 are not forced to upgrade to incompatible releases.

 Actually, I'm currently personally more concerned about the breakage we
 introduce in minor releases.  We'd need to solve that problem before we
 can even begin to think about dealing with the major release issue.

Thanks, these look like reasonable discussion points with no personal
comments added.

I agree that config parameters don't solve the whole problem, though
much of the iceberg is already covered with them. Currently about half
of our parameters are on/off behaviour switches. Right now we are
inconsistent about whether we add a parameter for major features:
sync_scans, hot_standby and partial vacuum all had ways of disabling
them if required - virtually all features can be disabled (bgwriter,
autovacuum, etc.), even though doing so is almost never recommended.
I can't see a good argument for including some switches, but not
others. SSI is a complex new feature and really should have an off
switch.

Right now, we've had one report and a benchmark that show severe
performance degradation and might have benefited from turning it
off. That is not sufficient at this point to convince some people, so
I am happy to wait to see if further reports emerge. SSI doesn't
affect everybody.

Yes, I agree that the only really good answer in the general case is
to maintain back branches, or provide enhanced versions as some
vendors choose to do. That is not my first choice, though, and I try
quite hard to put my (/our) best work into mainline Postgres.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Magnus Hagander
On Wed, Oct 12, 2011 at 10:39, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 12.10.2011 10:58, Stefan Kaltenbrunner wrote:

 On 10/12/2011 09:53 AM, Martin Pitt wrote:

 Hello all,

 In https://launchpad.net/bugs/835502 it was reported that the 9.1.1
 contrib *.sql files contain the token MODULE_PATHNAME, which is
 unknown:

   psql test < /usr/share/postgresql/9.1/extension/intarray--1.0.sql

 This fails with a ton of errors about the file MODULE_PATHNAME not
 existing.

 When I replace this with $libdir/_int, it works:

   sed 's!MODULE_PATHNAME!$libdir/_int!g' /usr/share/postgresql/9.1/extension/intarray--1.0.sql | psql test

 Is that something I'm doing wrong in the packaging, or should the
 contrib Makefiles be fixed to do this substitution?

 It doesn't only affect intarray, but pretty much all *.sql files.

 uh - the reason is that contrib is now packaged as extensions and that
 you are supposed to use CREATE EXTENSION intarray; on the SQL level
 instead of manually loading sql-scripts through psql.

 9.1 has been out for only a couple of months, and we've seen a lot of people
 trying to do that already. In hindsight, we probably should've chosen a
 different filename extension for those files, to make it clear that you
 can't just run them in psql. It's too late for that, but a comment at the
 top of the .sql files would be good:

 --- a/contrib/intarray/intarray--1.0.sql
 +++ b/contrib/intarray/intarray--1.0.sql
 @@ -1,4 +1,8 @@
 -/* contrib/intarray/intarray--1.0.sql */
 +/*
 + * contrib/intarray/intarray--1.0.sql
 + *
 + * Script file to be run by CREATE EXTENSION.
 + */

  --
  -- Create the user-defined type for the 1-D integer arrays (_int4)

 The people trying to run these files with psql look inside the file when
 they get the error, so mentioning CREATE EXTENSION should give a hint on
 what to do.

Hmm. Is there some way that we could make it do something that would
affect only psql? I guess not, because any kind of \-command would
actually break the CREATE EXTENSION running of the script, right?

But it would be useful to be able to inject something there that psql
would notice but the backend would ignore...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pl/perl example in the doc no longer works in 9.1

2011-10-12 Thread Amit Khandekar
Hi,

An example in the PG documentation gives an error:

http://www.postgresql.org/docs/9.1/interactive/plperl-global.html

CREATE OR REPLACE FUNCTION myfuncs() RETURNS void AS $$
$_SHARED{myquote} = sub {
my $arg = shift;
$arg =~ s/(['\\])/\\$1/g;
return "'$arg'";
};
$$ LANGUAGE plperl;

SELECT myfuncs(); /* initializes the function */

ERROR:  PL/Perl function must return reference to hash or array
CONTEXT:  PL/Perl function myfuncs

Not sure if this is now an expected behaviour. Is it? Accordingly, I
can open this in pgsql-bugs or put this issue in pgsql-docs.
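
For what it's worth, a hedged guess at a workaround (an untested
assumption, not a confirmed fix): the anonymous sub is the last
expression evaluated, so the code reference becomes the function's
implicit return value and trips 9.1's stricter return-value check.
Ending the function with a bare return would keep the coderef out of
the result:

    CREATE OR REPLACE FUNCTION myfuncs() RETURNS void AS $$
    $_SHARED{myquote} = sub {
        my $arg = shift;
        $arg =~ s/(['\\])/\\$1/g;
        return "'$arg'";
    };
    return;   # assumption: avoid returning the coderef implicitly
    $$ LANGUAGE plperl;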

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] loss of transactions in streaming replication

2011-10-12 Thread Fujii Masao
Hi,

In 9.2dev and 9.1, when walreceiver detects an error while sending data to
the WAL stream, it always emits ERROR even if there are data available in the
receive buffer. This might lead to loss of transactions because such
remaining data are not received by walreceiver :(

To prevent transaction loss, I'm thinking to change walreceiver so that it
always ignores an error (specifically, emits COMMERROR instead of ERROR)
while sending data. Then walreceiver receives the data if available. If an
error occurs while receiving data, walreceiver can emit ERROR at that point.
Comments? Better ideas?
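
A hedged sketch of what that could look like at the send site
(illustrative only - streamConn and the libpq calls follow the existing
walreceiver code, but this is not the actual patch):

    /*
     * Sketch: demote a send failure to COMMERROR so walreceiver keeps
     * running and can drain whatever is already in the receive buffer.
     */
    if (PQputCopyData(streamConn, buffer, nbytes) <= 0 ||
        PQflush(streamConn) != 0)
        ereport(COMMERROR,
                (errmsg("could not send data to WAL stream: %s",
                        PQerrorMessage(streamConn))));
    /* fall through to the receive path (PQgetCopyData) regardless */

COMMERROR logs the problem without throwing, so control still reaches
the receive path; only a failure while actually receiving data would be
promoted to ERROR.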

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [v9.2] DROP statement reworks

2011-10-12 Thread Kohei KaiGai
Robert,

I'm currently trying to revise my patches according to your suggestions,
but I'm facing trouble with error messages when a user tries to drop
a non-existent object.

Because the ObjectProperty array has an entry for each catalog, it is
unable to hold the name of the object type (such as table or index)
when multiple object types are associated with a particular system
catalog, such as pg_class, pg_type or pg_proc.

How should I implement the following block?

if (!OidIsValid(address.objectId))
{
    ereport(NOTICE,
            (errmsg("%s \"%s\" does not exist, skipping",
                    get_object_property_typetext(stmt->removeType),
                    NameListToString(objname))));
    continue;
}

One idea is to add a separate array to translate from OBJECT_* to
its text representation. (Maybe it can also be used for translations
in the opposite direction.)

Thanks,

2011/10/11 Robert Haas robertmh...@gmail.com:
 On Mon, Oct 10, 2011 at 1:38 PM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 I'm sorry again. I thought it was obvious from the filenames.

 I guess I got confused because you re-posted part 2 without the other
 parts, and I got mixed up and thought you were reposting part one.

 I've committed a stripped-down version of the part one patch, which
 had several mistakes even in just the part I committed - e.g., you
 forgot TABLESPACEOID.  I also did some renaming for clarity.

 I'm going to throw this back to you for rebasing at this point, which
 I realize is going to be somewhat of an unenjoyable task given the way
 I cut up your changes to objectaddress.c, but I wasn't very confident
 that all of the entries were correct (the one for attributes seemed
 clearly wrong to me, for example), and I didn't want to commit a bunch
 of stuff that wasn't going to be exercised.  I suggest that you merge
 the remainder of the part-one changes into part-two.  On the flip
 side, I think you should take the stuff that deals with dropping
 relations OUT of part two.  I don't see what good it does us to try to
 centralize the drop logic if we still have to have special cases for
 relations, so let's just leave that separate for now until we figure
 out a better approach, or at least split it off as a separate patch so
 that it doesn't hold up all the other changes.

 I think get_object_namespace() needs substantial revision.  Instead of
 passing the object type and the object address, why not just pass the
 object address?  You should be able to use the classId in the address
 to figure out everything you need to know.  Since this function is
 private to objectaddress.c, there's no reason for it to use those
 accessor functions - it can just iterate through the array just as
 object_exists() does.  That way you also avoid iterating through the
 array multiple times.  I also think that we probably ought to revise
 AlterObjectNamespace() to make use of this new machinery, instead of
 making the caller pass in all the same information that
 objectaddress.c is now learning how to provide.  That would possibly
 open the way to a bunch more consolidation of the SET SCHEMA code; in
 fact, we might want to clean that up first, before dealing with the
 DROP stuff.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company




-- 
KaiGai Kohei kai...@kaigai.gr.jp

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Overhead cost of Serializable Snapshot Isolation

2011-10-12 Thread Florian Pflug
On Oct11, 2011, at 23:35 , Simon Riggs wrote:
 On Tue, Oct 11, 2011 at 10:30 PM, Florian Pflug f...@phlo.org wrote:
 
 That experience has taught me that backwards compatibility, while very
 important in a lot of cases, has the potential to do just as much harm
 if overdone.
 
 Agreed. Does my suggestion represent overdoing it? I ask for balance,
 not an extreme.

It's my belief that an off switch for true serializability is overdoing
it, yes.

With such a switch, every application that relies on true serializability for
correctness would be prone to silent data corruption should the switch ever
get set to off accidentally.

Without such a switch, OTOH, all that will happen is a few more aborts due to
serialization errors in applications that request SERIALIZABLE when they really
only need REPEATABLE READ. Which, in the worst case, is a performance issue,
but never an issue of correctness.

best regards,
Florian Pflug


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [v9.2] DROP statement reworks

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 8:07 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 I'm currently trying to revise my patches according to your suggestions,
 but I'm facing trouble with error messages when a user tries to drop
 a non-existent object.

 Because the ObjectProperty array has an entry for each catalog, it is
 unable to hold the name of the object type (such as table or index)
 when multiple object types are associated with a particular system
 catalog, such as pg_class, pg_type or pg_proc.

 How should I implement the following block?

         if (!OidIsValid(address.objectId))
         {
             ereport(NOTICE,
                     (errmsg("%s \"%s\" does not exist, skipping",
                             get_object_property_typetext(stmt->removeType),
                             NameListToString(objname))));
             continue;
         }

 One idea is to add a separate array to translate from OBJECT_* to
 its text representation. (Maybe it can also be used for translations
 in the opposite direction.)

For reasons of translation, you can't do something like "%s \"%s\"
does not exist, skipping".  Instead I think you need an array that
works something like dropmsgstringarray[], but based on the OBJECT_*
constants rather than the RELKIND_* constants.  IOW, it maps the
object type to the full error message, not just the name of the object
type.
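
A hedged sketch of that shape (entry names are illustrative, not from
any committed patch; only a few object types shown):

    /* Map OBJECT_* constants to complete, translatable messages. */
    static const char *const dropmsg_by_objtype[] = {
        [OBJECT_TABLE] = gettext_noop("table \"%s\" does not exist, skipping"),
        [OBJECT_INDEX] = gettext_noop("index \"%s\" does not exist, skipping"),
        [OBJECT_TYPE]  = gettext_noop("type \"%s\" does not exist, skipping"),
        /* ... one entry per droppable object type ... */
    };

    ereport(NOTICE,
            (errmsg(dropmsg_by_objtype[stmt->removeType],
                    NameListToString(objname))));

Keeping each whole sentence in one string is what makes it translatable:
the translator sees a complete message instead of a bare %s placeholder
for the object-type word.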

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [v9.2] DROP statement reworks

2011-10-12 Thread Kohei KaiGai
2011/10/12 Robert Haas robertmh...@gmail.com:
 On Wed, Oct 12, 2011 at 8:07 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 I'm currently trying to revise my patches according to your suggestions,
 but I'm facing trouble with error messages when a user tries to drop
 a non-existent object.

 Because the ObjectProperty array has an entry for each catalog, it is
 unable to hold the name of the object type (such as table or index)
 when multiple object types are associated with a particular system
 catalog, such as pg_class, pg_type or pg_proc.

 How should I implement the following block?

         if (!OidIsValid(address.objectId))
         {
             ereport(NOTICE,
                     (errmsg("%s \"%s\" does not exist, skipping",
                             get_object_property_typetext(stmt->removeType),
                             NameListToString(objname))));
             continue;
         }

 One idea is to add a separate array to translate from OBJECT_* to
 its text representation. (Maybe it can also be used for translations
 in the opposite direction.)

 For reasons of translation, you can't do something like "%s \"%s\"
 does not exist, skipping".  Instead I think you need an array that
 works something like dropmsgstringarray[], but based on the OBJECT_*
 constants rather than the RELKIND_* constants.  IOW, it maps the
 object type to the full error message, not just the name of the object
 type.

OK, I'll revise the code based on this idea.

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Andrew Dunstan



On 10/12/2011 04:39 AM, Heikki Linnakangas wrote:

On 12.10.2011 10:58, Stefan Kaltenbrunner wrote:

On 10/12/2011 09:53 AM, Martin Pitt wrote:

Hello all,

In https://launchpad.net/bugs/835502 it was reported that the 9.1.1
contrib *.sql files contain the token MODULE_PATHNAME, which is
unknown:

   psql test < /usr/share/postgresql/9.1/extension/intarray--1.0.sql

This fails with a ton of errors about the file MODULE_PATHNAME not
existing.

When I replace this with $libdir/_int, it works:

   sed 's!MODULE_PATHNAME!$libdir/_int!g' /usr/share/postgresql/9.1/extension/intarray--1.0.sql | psql test


Is that something I'm doing wrong in the packaging, or should the
contrib Makefiles be fixed to do this substitution?

It doesn't only affect intarray, but pretty much all *.sql files.


uh - the reason is that contrib is now packaged as extensions and that
you are supposed to use CREATE EXTENSION intarray; on the SQL level
instead of manually loading sql-scripts through psql.


9.1 has been out for only a couple of months, and we've seen a lot of 
people trying to do that already. In hindsight, we probably should've 
chosen a different filename extension for those files, to make it 
clear that you can't just run them in psql. It's too late for that, 
but a comment at the top of the .sql files would be good:





I've made this mistake myself in an unthinking moment. I suggest that we 
deprecate calling them something.sql and add code ASAP providing for 
some other suffix ( .xtn ?) with legacy support for falling back to 
.sql. I'd almost be inclined to backpatch it while there are so few 
third party extensions out there.


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Overhead cost of Serializable Snapshot Isolation

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 8:44 AM, Florian Pflug f...@phlo.org wrote:
 With such a switch, every application that relies on true serializability for
 correctness would be prone to silent data corruption should the switch ever
 get set to off accidentally.

Agreed.

 Without such a switch, OTOH, all that will happen is a few more aborts due to
 serialization errors in applications that request SERIALIZABLE when they really
 only need REPEATABLE READ. Which, in the worst case, is a performance issue,
 but never an issue of correctness.

Right.  And, in fairness:

1. The benchmark that I did was probably close to a worst-case
scenario for SSI.  Since there are no actual writes, there is no
possibility of serialization conflicts, but the system must still be
prepared for the possibility of a write (and, thus, potentially, a
conflict) at any time.  In addition, all of the transactions are very
short, magnifying the effect of transaction start and cleanup
overhead.  In real life, people who have this workload are unlikely to
use serializable mode in the first place.  The whole point of
serializability (not just SSI) is that it helps prevent anomalies when
you have complex transactions that could allow subtle serialization
anomalies to creep in.  Single-statement transactions that read (or
write) values based on a primary key are not the workload where you
have that problem.  You'd probably be happy to turn off MVCC
altogether if we had an option for that.

2. Our old SERIALIZABLE behavior (now REPEATABLE READ) is a pile of
garbage.  Since Kevin started beating the drum about SSI, I've come
across (and posted about) situations where REPEATABLE READ causes
serialization anomalies that don't exist at the READ COMMITTED level
(which is exactly the opposite of what is really supposed to happen -
REPEATABLE READ is supposed to provide more isolation, not less); and
Kevin's pointed out many situations where REPEATABLE READ utterly
fails to deliver serializable behavior.  I'm not exactly thrilled with
these benchmark results, but going back to a technology that doesn't
work is not better.  If individual users want to request that
defective behavior for their applications, I am fine with giving them
that option, and we have.  But if people actually want serializability
and we give them REPEATABLE READ, then they're going to get wrong
behavior.  The fact that we've been shipping that wrong behavior for
years and years for lack of anything better is not a reason to
continue doing it.

I agree with Tom's comment upthread that the best thing to do here is
put some effort into improving SSI.  I think it's probably going to
perform adequately for the workloads where people actually need it,
but I'd certainly like to see us make it better.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] loss of transactions in streaming replication

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 5:45 AM, Fujii Masao masao.fu...@gmail.com wrote:
 In 9.2dev and 9.1, when walreceiver detects an error while sending data to
 the WAL stream, it always emits ERROR even if there are data available in the
 receive buffer. This might lead to loss of transactions because such
 remaining data are not received by walreceiver :(

Won't it just reconnect?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 2:50 AM, Jeff Davis pg...@j-davis.com wrote:
 On Tue, 2011-10-11 at 13:22 -0400, Robert Haas wrote:
 The real issue is that the costing estimates need to be accurate, and
 that's where the rubber hits the road.  Otherwise, even if we pick the
 right way to scan the table, we may do silly things up the line when
 we go to start constructing the join order.  I think we need to beef
 up ANALYZE to gather statistics on the fraction of the pages that are
 marked all-visible, or maybe VACUUM should gather that information.
 The trouble is that if we VACUUM and then ANALYZE, we'll often get
 back a value very close to 100%, but then the real value may diminish
 quite a bit before the next auto-analyze fires.  I think if we can
 figure out what to do about that problem we'll be well on our way...

 Can you send stats messages to keep track of when you unset a bit in the
 VM? That might allow it to be more up-to-date.

In theory, that seems like it would work, although I'm a little
worried about the overhead.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 4:42 AM, Magnus Hagander mag...@hagander.net wrote:
 9.1 has been out for only a couple of months, and we've seen a lot of people
 trying to do that already. In hindsight, we probably should've chosen a
 different filename extension for those files, to make it clear that you
 can't just run them in psql. It's too late for that, but a comment at the
 top of the .sql files would be good:

 --- a/contrib/intarray/intarray--1.0.sql
 +++ b/contrib/intarray/intarray--1.0.sql
 @@ -1,4 +1,8 @@
 -/* contrib/intarray/intarray--1.0.sql */
 +/*
 + * contrib/intarray/intarray--1.0.sql
 + *
 + * Script file to be run by CREATE EXTENSION.
 + */

  --
  -- Create the user-defined type for the 1-D integer arrays (_int4)

 The people trying to run these files with psql look inside the file when
 they get the error, so mentioning CREATE EXTENSION should give a hint on
 what to do.

 Hmm. Is there some way that we could make it do something that would
 affect only psql? I guess not, because any kind of \-command would
 actually break the CREATE EXTENSION running of the script, right?

 But it would be useful to be able to inject something there that psql
 would notice but the backend would ignore...

We could do that, but I think Heikki's idea of adding a comment would
help a lot.  When people try to run the file through psql and it fails
due to the MODULE_PATHNAME stuff, the next thing they're probably
going to do is look at the file contents.  If they see something there
telling them to use CREATE EXTENSION, that will help.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Wed, Oct 12, 2011 at 2:50 AM, Jeff Davis pg...@j-davis.com wrote:
 On Tue, 2011-10-11 at 13:22 -0400, Robert Haas wrote:
 The real issue is that the costing estimates need to be accurate, and
 that's where the rubber hits the road.

 Can you send stats messages to keep track of when you unset a bit in the
 VM? That might allow it to be more up-to-date.

 In theory, that seems like it would work, although I'm a little
 worried about the overhead.

I think it's overkill, and possibly unpleasantly unstable as well.
For the initial attack on this we should just have VACUUM and ANALYZE
count the number of all-visible blocks and store that in pg_class along
with the tuple-count statistics.  There's no reason to think that this
will be any worse than the way we deal with dead tuple counts, for
instance.
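
(As a concrete illustration of where such a counter could live - the
column name below is this sketch's assumption, not a committed catalog
change:

    SELECT relname, relpages, reltuples, relallvisible
    FROM pg_class
    WHERE relname = 'some_table';

i.e. one more per-relation field next to the existing page and tuple
counts, refreshed by VACUUM and ANALYZE.)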

What bothers me considerably more is the issue about how specific
queries might see an all-visible fraction that's very substantially
different from the table's overall ratio, especially in examples such as
historical-data tables where most of the update and query activity has
to do with recently-added rows.  I don't see any practical way to attack
that problem with statistics; we're just going to have to adopt some
estimation rule.  What I suggest as a first cut for that is: simply
derate the visibility fraction as the fraction of the table expected to
be scanned gets smaller.  That is, if the query fetches nearly all of
the table, take the stored visibility ratio at face value; if it fetches
only one block, never believe that that will be an all-visible block;
and in general if we're expecting to read a fraction f of the pages,
multiply the whole-table visibility ratio by f before using it in the
cost estimate.  This amounts to assuming that the historical-data case
is the usual case, but I'm not sure that's unfair.
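
(Spelling out that rule as a hedged sketch - just the arithmetic
described above, not planner source; assumes total_pages > 0:

    /*
     * Derate the stored all-visible fraction by the fraction of the
     * table the query is expected to read.
     */
    static double
    derated_allvisfrac(double allvisfrac, double pages_to_fetch,
                       double total_pages)
    {
        double  f = pages_to_fetch / total_pages;   /* expected scan fraction */

        return allvisfrac * f;
    }

A whole-table scan (f = 1) uses the measured ratio at face value, while
a single-page fetch effectively never expects an all-visible hit.)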

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 We could do that, but I think Heikki's idea of adding a comment would
 help a lot.

+1.  Simple, easy, should help significantly.

Also, I disagree with the position that the files aren't SQL files.
Sure they are.   You'd want them treated as SQL by your editor, for
example.  So changing the extension is just wrong.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Greg Stark
On Wed, Oct 12, 2011 at 2:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I think it's overkill, and possibly unpleasantly unstable as well.
 For the initial attack on this we should just have VACUUM and ANALYZE
 count the number of all-visible blocks and store that in pg_class along
 with the tuple-count statistics.  There's no reason to think that this
 will be any worse than the way we deal with dead tuple counts, for
 instance.

So I have a theory.

Assuming you're in a steady-state situation, the fraction of all-visible
blocks will fluctuate from a high just after vacuum to a low just
before the next vacuum. There are other ways a block can be marked
all-visible but for the most part I would expect the fraction to go
steadily down until vacuum comes along and cleans things up.

So if vacuum tracked the fraction of blocks marked all-visible
*before* it processed them and the fraction it marked all-visible
after processing we have an upper and lower bound. If we knew how long
it's been since vacuum we could interpolate between those, or we could
just take the mean, or we could take the lower bound as a conservative
estimate.
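
(A hedged sketch of that interpolation - illustrative arithmetic only:

    /*
     * frac_after: fraction marked all-visible when VACUUM finished (high).
     * frac_before: fraction VACUUM observed before processing (low).
     * t: time since the last VACUUM as a fraction of the expected
     *    vacuum interval, clamped to [0, 1].
     */
    static double
    interpolated_allvisfrac(double frac_before, double frac_after, double t)
    {
        if (t < 0.0)
            t = 0.0;
        else if (t > 1.0)
            t = 1.0;
        return frac_after + (frac_before - frac_after) * t;
    }

At t = 0 this returns the fresh post-VACUUM fraction and decays linearly
toward the pre-VACUUM lower bound; fixing t at 0.5 gives the mean, and
fixing t at 1 gives the conservative estimate.)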

 What I suggest as a first cut for that is: simply derate the visibility 
 fraction as the fraction
 of the table expected to be scanned gets smaller.

I think there's a statistically more rigorous way of accomplishing the
same thing. If you treat the pages we estimate we're going to read as
a random sample of the population of pages then your expected value is
the fraction of the overall population that is all-visible but your
95th percentile confidence interval will be, uh, a simple formula we
can compute but I don't recall off-hand.
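
(For the record, the formula being reached for: treating the f*N pages
to be read as a simple random sample, the sampled all-visible proportion
p has standard error sqrt(p*(1-p)/(f*N)), ignoring the finite-population
correction, so the normal-approximation 95% interval is
p +/- 1.96*sqrt(p*(1-p)/(f*N)).)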

This gets back to a discussion long-ago of what estimates the planner
should be using. It currently uses all expected values but in many
cases it would be valuable if the planner knew what the standard
deviation of those estimates was. It might sometimes be better to pick
a plan that's slightly worse on average but is less likely to be much
worse.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 9:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 What bothers me considerably more is the issue about how specific
 queries might see an all-visible fraction that's very substantially
 different from the table's overall ratio, especially in examples such as
 historical-data tables where most of the update and query activity has
 to do with recently-added rows.  I don't see any practical way to attack
 that problem with statistics; we're just going to have to adopt some
 estimation rule.  What I suggest as a first cut for that is: simply
 derate the visibility fraction as the fraction of the table expected to
 be scanned gets smaller.  That is, if the query fetches nearly all of
 the table, take the stored visibility ratio at face value; if it fetches
 only one block, never believe that that will be an all-visible block;
 and in general if we're expecting to read a fraction f of the pages,
 multiply the whole-table visibility ratio by f before using it in the
 cost estimate.  This amounts to assuming that the historical-data case
 is the usual case, but I'm not sure that's unfair.

I don't think that's an unfair assumption -- in fact I think it's
exactly the right assumption -- but I'm worried about how the math
works out with that specific proposal.

- Suppose VACUUM processes the table and makes it all-visible.  Then,
somebody comes along and updates one tuple on every page, making them
all not-all-visible, but not triggering VACUUM because we're nowhere
close to the 20% threshold.  Now COUNT(*) will think it should use an
index-scan, but really... not so much.  In fact, even if a tuple has
been updated on only 25% of the pages, we're probably in trouble.

- Suppose the table has a million rows and we're going to read 100 of
them, or 0.01%.  Now it might appear that a covering index has a
negligible advantage over a non-covering index, but in fact I think we
still want to err on the side of trying to use the covering index.  In
fact, even if we're only reading a single row, we probably still
generally want to pick up the covering index, to cater to the case
where someone is doing primary key fetches against a gigantic table
and hoping that index-only scans will save them from random I/O hell.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Tom Lane
Greg Stark st...@mit.edu writes:
 Assuming you're in a steady-state situation the amount of all-visible
 blocks will fluctuate from a high just after vacuum to a low just
 before the next vacuum. There are other ways a block can be marked
 all-visible but for the most part I would expect the fraction to go
 steadily down until vacuum comes along and cleans things up.

 So if vacuum tracked the fraction of blocks marked all-visible
 *before* it processed them and the fraction it marked all-visible
 after processing we have an upper and lower bound. If we knew how long
 it's been since vacuum we could interpolate between those, or we could
 just take the mean, or we could take the lower bound as a conservative
 estimate.

I thought of that too, but we don't do the comparable thing for dead
tuple counts, and I am not convinced that we should do it for
visibility.  I'd rather have a simple rule that it's right immediately
after VACUUM, so that at least trivial cases like read-only tables
work correctly.

 What I suggest as a first cut for that is: simply derate the visibility 
 fraction as the fraction
 of the table expected to be scanned gets smaller.

 I think there's a statistically more rigorous way of accomplishing the
 same thing. If you treat the pages we estimate we're going to read as
 a random sample of the population of pages then your expected value is
 the fraction of the overall population that is all-visible but your
 95th percentile confidence interval will be, uh, a simple formula we
 can compute but I don't recall off-hand.

The problem is precisely that the pages a query is going to read are
likely to *not* be a random sample, but to be correlated with
recently-dirtied pages.

 ... It currently uses all expected values but in many
 cases it would be valuable if the planner knew what the standard
 deviation of those estimates was.

No doubt, but I'm not volunteering to fix that before we can have a
non-toy estimate for index-only scans.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Andrew Dunstan



On 10/12/2011 10:12 AM, Tom Lane wrote:

Robert Haas robertmh...@gmail.com writes:

We could do that, but I think Heikki's idea of adding a comment would
help a lot.

+1.  Simple, easy, should help significantly.

Also, I disagree with the position that the files aren't SQL files.
Sure they are.   You'd want them treated as SQL by your editor, for
example.  So changing the extension is just wrong.



*shrug* ok. Another thought I had was to have the file raise an error 
and have that filtered out by the extension mechanism. But I'm not sure 
if it's worth the trouble.


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Wed, Oct 12, 2011 at 9:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 What bothers me considerably more is the issue about how specific
 queries might see an all-visible fraction that's very substantially
 different from the table's overall ratio,

 - Suppose VACUUM processes the table and makes it all-visible.  Then,
 somebody comes along and updates one tuple on every page, making them
 all not-all-visible, but not triggering VACUUM because we're nowhere
 close to the 20% threshold.  Now COUNT(*) will think it should use an
 index-scan, but really... not so much.  In fact, even if a tuple has
 been updated on only 25% of the pages, we're probably in trouble.

Yeah, but that would be a pretty unlucky pattern, and in any case the
fix for it is going to be to make autovacuum more aggressive.

 - Suppose the table has a million rows and we're going to read 100 of
 them, or 0.01%.  Now it might appear that a covering index has a
 negligible advantage over a non-covering index, but in fact I think we
 still want to err on the side of trying to use the covering index.

Given that fact pattern we still will, I think.  We'll still prefer an
indexscan over a seqscan, for sure.  In any case, if you believe the
assumption that those 100 rows are more likely to be recently-dirtied
than the average row, I'm not sure why you think we should be trying to
force an assumption that index-only will succeed here.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 *shrug* ok. Another thought I had was to have the file raise an error 
 and have that filtered out by the extension mechanism. But I'm not sure 
 if it's worth the trouble.

Hmm ...

\echo You should use "CREATE EXTENSION foo" to load this file!

and teach CREATE EXTENSION to drop any line beginning with \echo?
The latter part seems easy enough, but I'm not quite sure about the
wording or placement of the \echo command.  Putting it at the top
feels natural but the message might scroll offscreen due to errors...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Aidan Van Dyk
On Wed, Oct 12, 2011 at 10:37 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 - Suppose the table has a million rows and we're going to read 100 of
 them, or 0.01%.  Now it might appear that a covering index has a
 negligible advantage over a non-covering index, but in fact I think we
 still want to err on the side of trying to use the covering index.

 Given that fact pattern we still will, I think.  We'll still prefer an
 indexscan over a seqscan, for sure.  In any case, if you believe the
 assumption that those 100 rows are more likely to be recently-dirtied
 than the average row, I'm not sure why you think we should be trying to
 force an assumption that index-only will succeed here.

The elephant in the room is that the index-only-scan really doesn't
save a *whole* lot if the heap pages are already in shared buffers.
But it matters a *lot* when the heap pages are not in shared buffers
(both ways, saving IO, or causing lots of random IO)

Can we hope that if pages are not in shared buffers, they are not
recently modified, so hopefully both all-visible and with the VM
bit set?  Or does the table-based nature of vacuum mean there is no
value there?

-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Greg Stark
On Wed, Oct 12, 2011 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 What I suggest as a first cut for that is: simply derate the visibility 
 fraction as the fraction
 of the table expected to be scanned gets smaller.

 I think there's a statistically more rigorous way of accomplishing the
 same thing. If you treat the pages we estimate we're going to read as
 a random sample of the population of pages then your expected value is
 the fraction of the overall population that is all-visible but your
 95th percentile confidence interval will be, uh, a simple formula we
 can compute but I don't recall off-hand.

 The problem is precisely that the pages a query is going to read are
 likely to *not* be a random sample, but to be correlated with
 recently-dirtied pages.

Sure, but I was suggesting aiming for the nth percentile rather than a
linear factor which I don't know has any concrete meaning.


-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Heikki Linnakangas

On 12.10.2011 17:33, Magnus Hagander wrote:

How about adding something like
-- \\psql_hates_this
-- rest of comment

and then at least have new versions of psql find that and stop
processing the file with a more useful error at that point? Or maybe
that's overengineering..


Overengineering IMHO. Besides, if a "psql poison" comment like that
exists, then we'd have to be careful not to emit one elsewhere. Think
pg_dump, if someone puts that comment in a function body...


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Aidan Van Dyk
On Wed, Oct 12, 2011 at 10:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Andrew Dunstan and...@dunslane.net writes:
 *shrug* ok. Another thought I had was to have the file raise an error
 and have that filtered out by the extension mechanism. But I'm not sure
 if it's worth the trouble.

 Hmm ...

 \echo You should use "CREATE EXTENSION foo" to load this file!

 and teach CREATE EXTENSION to drop any line beginning with \echo?
 The latter part seems easy enough, but I'm not quite sure about the
 wording or placement of the \echo command.  Putting it at the top
 feels natural but the message might scroll offscreen due to errors...

Decorate them with a marker like:
   \extension name version

And make the CREATE EXTENSION skip (or verify) it?

It will make psql stop on the \extension command.

a.


-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Tom Lane
Aidan Van Dyk ai...@highrise.ca writes:
 On Wed, Oct 12, 2011 at 10:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Hmm ...
 \echo You should use "CREATE EXTENSION foo" to load this file!

 Decorate them with a marker like:
\extension name version
 And make the CREATE EXTENSION skip (or verify) it?
 It will make psql stop on the \extension command.

No, the point is not to stop or fail, it is to print out an unmistakable
user instruction.  Otherwise we'll still be getting "cube.sql failed to
load for me" bug reports.  So I think \echo is entirely sufficient,
and we should not rely on psql features that aren't there yet.  Ideally
this should do what we want even in older psql releases.  \echo has been
there at least since 7.0.

It strikes me that we could get rid of the error message clutter
I worried about before if we coded like this:

/* contrib/foo--1.0.sql */

\echo Use "CREATE EXTENSION foo" to load this file. \quit

... SQL commands here ...

The forced \quit is a bit unfriendly maybe but it will get the job done.
And again, this isn't making any assumptions about which psql version
you're using.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Tom Lane
Greg Stark st...@mit.edu writes:
 On Wed, Oct 12, 2011 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 The problem is precisely that the pages a query is going to read are
 likely to *not* be a random sample, but to be correlated with
 recently-dirtied pages.

 Sure, but I was suggesting aiming for the nth percentile rather than a
 linear factor which I don't know has any concrete meaning.

Well, I have no problem with using a more complicated estimation
equation, but it might be nice to get some field experience with the
thing before we start complicating matters.

regards, tom lane



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 On 12.10.2011 17:33, Magnus Hagander wrote:
 How about adding something like
 -- \\psql_hates_this
 -- rest of comment
 
 and then at least have new versions of psql find that and stop
 processing the file with a more useful error at that point? Or maybe
 that's overengineering..

 Overengineering IMHO. Besides, if a psql poison comment like that 
 exists, then we'd have to be careful not to emit one elsewhere. Think 
 pg_dump, if someone puts that comment in a function body...

Well, it can't be a comment, but what about a real psql command?
See my suggestion of using \echo.

regards, tom lane



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Andrew Dunstan



On 10/12/2011 11:15 AM, Tom Lane wrote:


\echo Use "CREATE EXTENSION foo" to load this file. \quit




+1 for this.

cheers

andrew



Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Kevin Grittner
Aidan Van Dyk ai...@highrise.ca wrote:
 
 The elephant in the room is that the index-only-scan really
 doesn't save a *whole* lot if the heap pages are already in shared
 buffers.
 
It's not hard to create a simple test case where it's about three
times slower to go to cached heap pages than to use the values from
the index.  That was just my first try, so it's not likely to be a
real worst case, although I was using the default shared_memory
size, so a lot of the heap pages probably came from the OS cache,
rather than being in shared memory.
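
A minimal sketch of such a test (the table shape, row count, and the
enable_indexonlyscan switch are assumptions, not the script actually used):

CREATE TABLE t AS SELECT g AS id FROM generate_series(1, 1000000) g;
CREATE INDEX t_id_idx ON t (id);
VACUUM ANALYZE t;                  -- sets the visibility-map bits

\timing on
SET enable_indexonlyscan = off;    -- force visits to the (cached) heap
SELECT count(*) FROM t;
SET enable_indexonlyscan = on;     -- allow answering from the index alone
SELECT count(*) FROM t;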
 
 But it matters a *lot* when the heap pages are not in shared
 buffers
 
Yeah, obviously it matters more if you actually need to add a random
disk read.
 
-Kevin



Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Greg Stark
On Wed, Oct 12, 2011 at 4:26 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 But it matters a *lot* when the heap pages are not in shared
 buffers

 Yeah, obviously it matters more if you actually need to add a random
 disk read.

To be fair the indexes are also random I/O. So the case that really
matters is when the index fits in RAM but the heap does not.

-- 
greg



Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Tom Lane
Aidan Van Dyk ai...@highrise.ca writes:
 The elephant in the room is that the index-only-scan really doesn't
 save a *whole* lot if the heap pages are already in shared buffers.
 But it matters a *lot* when the heap pages are not in shared buffers
 (both ways, saving IO, or causing lots of random IO)

 Can we hope that if pages are not in shared buffers, they are not
 recently modified, so hopefully both all visible, and have the VM
 bit set?  Or does the table-based nature of vacuum mean there is no
 value there?

Hmm, that's an interesting point.  If you suppose that recently-modified
pages are likely to still be in memory then it could well be that an
index-only scan is relatively cheap (i.e., not many actual disk reads)
no matter whether it hits recently-modified pages or not.  So maybe the
first cut should just be to measure the overall visibility fraction and
use that at face value.
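
Once such a fraction is stored in pg_class, inspecting it could look like
the query below (the relallvisible column name is the one that eventually
materialized in 9.2, so treat it as an assumption at this point):

SELECT relname,
       relallvisible::float8 / GREATEST(relpages, 1) AS all_visible_fraction
FROM pg_class
WHERE relname = 'my_table';        -- hypothetical table name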

regards, tom lane



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 11:24 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Overengineering IMHO. Besides, if a psql poison comment like that
 exists, then we'd have to be careful not to emit one elsewhere. Think
 pg_dump, if someone puts that comment in a function body...

 Well, it can't be a comment, but what about a real psql command?
 See my suggestion of using \echo.

 Frankly I think a comment is sufficient. We can make it more complicated
 later if people are still confused.

+1.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Heikki Linnakangas

On 12.10.2011 18:20, Tom Lane wrote:

Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:

On 12.10.2011 17:33, Magnus Hagander wrote:

How about adding something like
-- \\psql_hates_this
-- rest of comment

and then at least have new versions of psql find that and stop
processing the file with a more useful error at that point? Or maybe
that's overengineering..



Overengineering IMHO. Besides, if a psql poison comment like that
exists, then we'd have to be careful not to emit one elsewhere. Think
pg_dump, if someone puts that comment in a function body...


Well, it can't be a comment, but what about a real psql command?
See my suggestion of using \echo.


Frankly I think a comment is sufficient. We can make it more complicated 
later if people are still confused.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 On 12.10.2011 18:20, Tom Lane wrote:
 Well, it can't be a comment, but what about a real psql command?
 See my suggestion of using \echo.

 Frankly I think a comment is sufficient. We can make it more complicated 
 later if people are still confused.

The thing is that this will be the third time we've gone back to try to
make it more apparent that you should use CREATE EXTENSION, and I no
longer believe that mere documentation is really going to get the job
done.  Putting in a comment will only stop the bug reports from people
who bother to examine the script contents before filing a report, but
the kind of folks who don't read the release notes probably won't do
that either.  In fact, if we just put in a comment, I confidently
predict we'll be coming back to revisit this issue again in future.

The only thing the \echo approach will cost us is a few more lines of C
code in execute_extension_script(), and I think it's more than worth
that given the evident scope of the problem.

regards, tom lane



Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 10:37 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Wed, Oct 12, 2011 at 9:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 What bothers me considerably more is the issue about how specific
 queries might see an all-visible fraction that's very substantially
 different from the table's overall ratio,

 - Suppose VACUUM processes the table and makes it all-visible.  Then,
 somebody comes along and updates one tuple on every page, making them
 all not-all-visible, but not trigger VACUUM because we're nowhere
 close the 20% threshold.  Now COUNT(*) will think it should use an
 index-scan, but really... not so much.  In fact, even if it's only
 that a tuple has been updated on 25% of the pages, we're probably in
 trouble.

 Yeah, but that would be a pretty unlucky pattern, and in any case the
 fix for it is going to be to make autovacuum more aggressive.

Hmm, maybe.
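
Concretely, the scenario quoted above looks something like this (the
numbers are assumptions; the point is roughly one updated tuple per heap
page):

VACUUM t;                                  -- every page now all-visible
UPDATE t SET id = id WHERE id % 100 = 0;   -- with ~100 rows per page this
                                           -- dirties one tuple on most pages,
                                           -- clearing their all-visible bits,
                                           -- yet touches only 1% of the rows,
                                           -- far below the 20% threshold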

 - Suppose the table has a million rows and we're going to read 100 of
 them, or 0.01%.  Now it might appear that a covering index has a
 negligible advantage over a non-covering index, but in fact I think we
 still want to err on the side of trying to use the covering index.

 Given that fact pattern we still will, I think.  We'll still prefer an
 indexscan over a seqscan, for sure.  In any case, if you believe the
 assumption that those 100 rows are more likely to be recently-dirtied
 than the average row, I'm not sure why you think we should be trying to
 force an assumption that index-only will succeed here.

I'm not concerned about an index scan vs. a sequential scan here.  I'm
concerned about it being impossible for the DBA to get an index-only
scan when s/he wants it very badly.  The current (stupid) formula
handles this case just about perfectly - it will prefer a smaller
index over a larger one, except when a covering index is available, in
which case it will prefer the smallest covering index.  That sounds
exactly right to me.  We get that behavior because the 10% of heap
fetches that we're assuming we'll get to skip is larger than the
penalty for using a bigger index.  If we take out 10% and replace it
by all_visible_percentage * fraction_of_tuples_fetched, then that 10%
is going to drop to some infinitesimally small value on single row
fetches from giant tables.  But that's exactly one of the cases for
which people want index-only scans in the first place.  It's no better
to be overly pessimistic here than it is to be overly optimistic.  If
the table is 90% all-visible, the probability of our finding an
all-visible row is probably not 90%.  But it's probably not 0.01% or
0.0001% either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I'm not concerned about an index scan vs. a sequential scan here.  I'm
 concerned about it being impossible for the DBA to get an index-only
 scan when s/he wants it very badly.  The current (stupid) formula
 handles this case just about perfectly - it will prefer a smaller
 index over a larger one, except when a covering index is available, in
 which case it will prefer the smallest covering index.  That sounds
 exactly right to me.  We get that behavior because the 10% of heap
 fetches that we're assuming we'll get to skip is larger than the
 penalty for using a bigger index.  If we take out 10% and replace it
 by all_visible_percentage * fraction_of_tuples_fetched, then that 10%
 is going to drop to some infinitesimally small value on single row
 fetches from giant tables.  But that's exactly one of the cases for
 which people want index-only scans in the first place.  It's no better
 to be overly pessimistic here than it is to be overly optimistic.  If
 the table is 90% all-visible, the probability of our finding an
 all-visible row is probably not 90%.  But it's probably not 0.01% or
 0.0001% either.

I think you're overstating the size of the problem.  Given that fact
pattern, the thing will choose an indexscan anyway, because it's the
cheapest alternative even under traditional costing rules.  And it will
choose to use an index-only scan if the index is covering.  It doesn't
matter what the exact cost estimate is.

The place where the decision is actually somewhat hard, IMO, is where
you're pulling a small part of the table but significantly more than one
row, and the traditional best choice would be a bitmap scan.  Now we
have to guess whether possibly avoiding heap fetches is better than
batching the fetches, and it doesn't seem open-and-shut to me.

But having said that, I see some credibility in Aidan's suggestion that
pages that actually have to be fetched from disk are disproportionately
likely to be all-visible.  That would counteract the history-table
effect of correlation between current reads and recent changes,
probably not perfectly, but to a considerable extent.  So right at the
moment I'm inclined to just apply the most-recently-measured visibility
fraction at face value.  We shouldn't complicate matters more than that
until we have more field experience to tell us what really happens.

regards, tom lane



Re: [HACKERS] pl/perl example in the doc no longer works in 9.1

2011-10-12 Thread David E. Wheeler
On Oct 12, 2011, at 2:16 AM, Amit Khandekar wrote:

 CREATE OR REPLACE FUNCTION myfuncs() RETURNS void AS $$
$_SHARED{myquote} = sub {
my $arg = shift;
$arg =~ s/(['\\])/\\$1/g;
return "'$arg'";
};
 $$ LANGUAGE plperl;
 
 SELECT myfuncs(); /* initializes the function */
 
 ERROR:  PL/Perl function must return reference to hash or array
 CONTEXT:  PL/Perl function myfuncs
 
 Not sure if this is now an expected behaviour. Is it? Accordingly, I
 can open this in pgsql-bugs or put this issue in pgsql-docs.

Seems like there should be a bare return at the end of the function, otherwise 
it returns the last expression, which happens to be a code reference. Not very 
useful in a function that should return VOID. New version:

CREATE OR REPLACE FUNCTION myfuncs() RETURNS void AS $$
   $_SHARED{myquote} = sub {
   my $arg = shift;
   $arg =~ s/(['\\])/\\$1/g;
   return "'$arg'";
   };
   return;
$$ LANGUAGE plperl;

Best,

David


Re: [HACKERS] pl/perl example in the doc no longer works in 9.1

2011-10-12 Thread Tom Lane
David E. Wheeler da...@kineticode.com writes:
 On Oct 12, 2011, at 2:16 AM, Amit Khandekar wrote:
 CREATE OR REPLACE FUNCTION myfuncs() RETURNS void AS $$
 $_SHARED{myquote} = sub {
 my $arg = shift;
 $arg =~ s/(['\\])/\\$1/g;
 return "'$arg'";
 };
 $$ LANGUAGE plperl;
 
 SELECT myfuncs(); /* initializes the function */
 
 ERROR:  PL/Perl function must return reference to hash or array
 CONTEXT:  PL/Perl function myfuncs
 
 Not sure if this is now an expected behaviour. Is it? Accordingly, I
 can open this in pgsql-bugs or put this issue in pgsql-docs.

 Seems like there should be a bare return at the end of the function, otherwise 
 it returns the last expression, which happens to be a code reference. Not 
 very useful in a function that should return VOID. New version:

Well, the real question is why a function declared to return VOID cares
at all about what the last command in its body is.  If this has changed
since previous versions then I think it's a bug and we should fix it,
not just change the example.

regards, tom lane



Re: [HACKERS] pl/perl example in the doc no longer works in 9.1

2011-10-12 Thread David E. Wheeler
On Oct 12, 2011, at 9:15 AM, Tom Lane wrote:

 Well, the real question is why a function declared to return VOID cares
 at all about what the last command in its body is.  If this has changed
 since previous versions then I think it's a bug and we should fix it,
 not just change the example.

It has indeed changed, either in 9.0 or 9.1 (the latter, I think). I had to add 
a bunch of bare return statements to existing functions.

Best,

David




Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 I'm not concerned about an index scan vs. a sequential scan here.  I'm
 concerned about it being impossible for the DBA to get an index-only
 scan when s/he wants it very badly.  The current (stupid) formula
 handles this case just about perfectly - it will prefer a smaller
 index over a larger one, except when a covering index is available, in
 which case it will prefer the smallest covering index.  That sounds
 exactly right to me.  We get that behavior because the 10% of heap
 fetches that we're assuming we'll get to skip is larger than the
 penalty for using a bigger index.  If we take out 10% and replace it
 by all_visible_percentage * fraction_of_tuples_fetched, then that 10%
 is going to drop to some infinitesimally small value on single row
 fetches from giant tables.  But that's exactly one of the cases for
 which people want index-only scans in the first place.  It's no better
 to be overly pessimistic here than it is to be overly optimistic.  If
 the table is 90% all-visible, the probability of our finding an
 all-visible row is probably not 90%.  But it's probably not 0.01% or
 0.0001% either.

 I think you're overstating the size of the problem.  Given that fact
 pattern, the thing will choose an indexscan anyway, because it's the
 cheapest alternative even under traditional costing rules.  And it will
 choose to use an index-only scan if the index is covering.  It doesn't
 matter what the exact cost estimate is.

Maybe I'm not being clear.  The case I'm worried about is when you
have a table with an id column (an integer) and a name column (text).
You have a unique index on (id), and an index on (id, name).  You then
do SELECT name FROM foo WHERE id = ?.  I understand it's going to pick
*an* index-scan, but which index is it going to pick?  The unique
index, because it's smaller, or the other one, because it's covering?
I think if the table is large your proposal will lead to ignoring the
covering index in favor of the smaller one, and I submit that's not
what we want, because, on the average, the index-only approach is
going to succeed often enough to be worth it.
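
In SQL terms, the fact pattern is (names hypothetical):

CREATE TABLE foo (id integer, name text);
CREATE UNIQUE INDEX foo_id_idx ON foo (id);       -- smaller, non-covering
CREATE INDEX foo_id_name_idx ON foo (id, name);   -- larger, but covering

SELECT name FROM foo WHERE id = 42;   -- which index should the planner pick?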

 The place where the decision is actually somewhat hard, IMO, is where
 you're pulling a small part of the table but significantly more than one
 row, and the traditional best choice would be a bitmap scan.  Now we
 have to guess whether possibly avoiding heap fetches is better than
 batching the fetches, and it doesn't seem open-and-shut to me.

Yes, I agree.

I was actually wondering if there is some way we could make index-only
scans work for bitmap index scans.  Something like this: If the index
is covering, then as we retrieve each tuple, we check whether the page
is all-visible.  If so, we return the data from the index tuple.  If
not, we save the TID for later.  Then, when we get done scanning the
index, we go back and fetch all the pages containing saved TIDs in
ascending block number order.  The trouble is that I think you could
get in some trouble if you use a TID bitmap here, because if you
lossify the bitmap then you have to make sure you can't return a tuple
that you already handled with the index-only approach (imagine that
the visibility map bit gets cleared partway through the scan).  All in
all this seems pretty complicated...

 But having said that, I see some credibility in Aidan's suggestion that
 pages that actually have to be fetched from disk are disproportionately
 likely to be all-visible.  That would counteract the history-table
 effect of correlation between current reads and recent changes,
 probably not perfectly, but to a considerable extent.  So right at the
 moment I'm inclined to just apply the most-recently-measured visibility
 fraction at face value.  We shouldn't complicate matters more than that
 until we have more field experience to tell us what really happens.

Fetches from disk aren't the only problem; thrashing shared_buffers is
pretty expensive, too.  But it's an interesting point.  I guess we
could give it a try and see what happens.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Magnus Hagander
On Wed, Oct 12, 2011 at 17:40, Tom Lane t...@sss.pgh.pa.us wrote:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 On 12.10.2011 18:20, Tom Lane wrote:
 Well, it can't be a comment, but what about a real psql command?
 See my suggestion of using \echo.

 Frankly I think a comment is sufficient. We can make it more complicated
 later if people are still confused.

 The thing is that this will be the third time we've gone back to try to
 make it more apparent that you should use CREATE EXTENSION, and I no
 longer believe that mere documentation is really going to get the job
 done.  Putting in a comment will only stop the bug reports from people
 who bother to examine the script contents before filing a report, but
 the kind of folks who don't read the release notes probably won't do
 that either.  In fact, if we just put in a comment, I confidently
 predict we'll be coming back to revisit this issue again in future.

That's exactly my concern - those who don't bother to read the release
notes even for a major release aren't going to review the source of
the SQL script either.


 The only thing the \echo approach will cost us is a few more lines of C
 code in execute_extension_script(), and I think it's more than worth
 that given the evident scope of the problem.

+1.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] index-only scans

2011-10-12 Thread Tom Lane
I wrote:
 I was also toying with the notion of pushing the slot fill-in into the
 AM, so that the AM API is to return a filled TupleSlot not an
 IndexTuple.  This would not save any cycles AFAICT --- at least in
 btree, we still have to make a palloc'd copy of the IndexTuple so that
 we can release lock on the index page.  The point of it would be to
 avoid the assumption that the index's internal storage has exactly the
 format of IndexTuple.  Don't know whether we'd ever have any actual use
 for that flexibility, but it seems like it wouldn't cost much to
 preserve the option.

 BTW, I concluded that that would be a bad idea, because it would involve
 the index AM in some choices that we're likely to want to change.  In
 particular it would foreclose ever doing anything with expression
 indexes, without an AM API change.  Better to just define the AM's
 responsibility as to hand back a tuple defined according to the index's
 columns.

Although this aspect of the code is now working well enough for btree,
I realized that it's going to have a problem if/when we add GiST
support.  The difficulty is that the index rowtype includes storage
datatypes, not underlying-heap-column datatypes, for opclasses where
those are different.  This is not going to do for cases where we need
to reconstruct a heap value from the index contents, as in Alexander's
example of gist point_ops using a box as the underlying storage.

What we actually want back from the index AM is a rowtype that matches
the list of values submitted for indexing (ie, the original output of
FormIndexDatum), and only for btree is it the case that that's
represented more or less exactly as the IndexTuple stored in the index.

So what I'm now thinking is to go back to the idea of having the index
AM fill in a TupleTableSlot.  For btree this would just amount to moving
the existing StoreIndexTuple function into the AM.  But it would give
GiST the opportunity to do some computation, and it would avoid the
problem of the index's rowtype not being a suitable intermediate format.
The objection I voiced above is misguided, because it confuses the set
of column types that's needed with the format distinction between a Slot
and an IndexTuple.

BTW, right at the moment I'm not that excited about actually doing
any work on GiST itself for index-only scans.  Given the current list of
available opclasses there don't seem to be very many for which
index-only scans would be possible, so the amount of work needed seems
rather out of proportion to the benefit.  But I don't mind fixing AM API
details that are needed to make this workable.  I'd rather have the API
as right as possible in the first release.

regards, tom lane



Re: [HACKERS] pg_ctl restart - behaviour based on wrong instance

2011-10-12 Thread Magnus Hagander
On Tue, Oct 11, 2011 at 23:35, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Wed, Mar 23, 2011 at 1:48 AM, Fujii Masao masao.fu...@gmail.com wrote:
  On Sat, Mar 19, 2011 at 10:20 AM, Robert Haas robertmh...@gmail.com 
  wrote:
  On Fri, Mar 18, 2011 at 1:19 PM, Erik Rijkers e...@xs4all.nl wrote:
  This is OK and expected.  But then it continues (in the logfile) with:
 
  FATAL:  lock file postmaster.pid already exists
  HINT:  Is another postmaster (PID 20519) running in data directory
  /var/data1/pg_stuff/pg_installations/pgsql.vanilla_1/data?
 
  So, complaints about the *other* instance.  It doesn't happen once a 
  successful start (with pg_ctl
  start) has happened.
 
  I'm guessing that leftover postmaster.pid contents might be
  responsible for this?
 
  The cause is that pg_ctl restart uses the postmaster.opts which was
  created in the primary. Since its content was something like
  "pg_ctl -D vanilla_1/data", vanilla_1/data/postmaster.pid was checked
  wrongly.
 
  The simple workaround is to exclude postmaster.opts from the backup
  as well as postmaster.pid. But when postmaster.opts doesn't exist,
  pg_ctl restart cannot start up the server. We might also need to change
  the code of pg_ctl restart so that it does just pg_ctl start when
  postmaster.opts doesn't exist.

 Sounds reasonable.

 I looked over this issue and I don't think having pg_ctl restart fall
 back to 'start' is a good solution.  I am concerned about cases where we
 start a different server without shutting down the old server, for some
 reason.  When they say 'restart', I think we have to assume they want a
 restart.

 What I did do was to document that not backing up postmaster.pid and
 postmaster.opts might help prevent pg_ctl from getting confused.

Should we exclude postmaster.opts from streaming base backups? We
already exclude postmaster.pid...


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Wed, Oct 12, 2011 at 17:40, Tom Lane t...@sss.pgh.pa.us wrote:
 The only thing the \echo approach will cost us is a few more lines of C
 code in execute_extension_script(), and I think it's more than worth
 that given the evident scope of the problem.

 +1.

PFA, a sample patch for this --- I've only bothered to change one script
file here, but will of course do the rest if there are no further
objections.  The technique actually works even better than I expected,
because of the seemingly nowhere documented fact that \quit in a script
file doesn't terminate psql, only processing of the script.  So what
I get is

regression=# \i ~/postgres/share/extension/cube--1.0.sql
Use "CREATE EXTENSION cube" to load this file.
regression=# 

which is about as good as one could hope for.

(Looks like a patch to the psql docs is upcoming, too.)

regards, tom lane


diff --git a/contrib/cube/cube--1.0.sql b/contrib/cube/cube--1.0.sql
index ee9febe005315bf13e93a9ef216a7411fc13a445..0d259c0969d9df4a00f160bf1fc00407273e2b1d 100644
*** a/contrib/cube/cube--1.0.sql
--- b/contrib/cube/cube--1.0.sql
***
*** 1,5 
--- 1,7 
  /* contrib/cube/cube--1.0.sql */
  
+ \echo Use "CREATE EXTENSION cube" to load this file. \quit
+ 
  -- Create the user-defined type for N-dimensional boxes
  
  CREATE FUNCTION cube_in(cstring)
diff --git a/src/backend/commands/extension.c b/src/backend/commands/extension.c
index 3af15dd38bb7044ec6f733787cad6897b204ccd3..ba1e2c45cd97c89265912d338a98f52bf48ea4b0 100644
*** a/src/backend/commands/extension.c
--- b/src/backend/commands/extension.c
***
*** 33,38 
--- 33,39 
  #include catalog/indexing.h
  #include catalog/namespace.h
  #include catalog/objectaccess.h
+ #include catalog/pg_collation.h
  #include catalog/pg_depend.h
  #include catalog/pg_extension.h
  #include catalog/pg_namespace.h
*** execute_extension_script(Oid extensionOi
*** 855,880 
  	CurrentExtensionObject = extensionOid;
  	PG_TRY();
  	{
! 		char	   *sql = read_extension_script_file(control, filename);
  
  		/*
  		 * If it's not relocatable, substitute the target schema name for
  		 * occurrences of @extschema@.
  		 *
! 		 * For a relocatable extension, we just run the script as-is. There
! 		 * cannot be any need for @extschema@, else it wouldn't be
! 		 * relocatable.
  		 */
  		if (!control->relocatable)
  		{
  			const char *qSchemaName = quote_identifier(schemaName);
  
! 			sql = text_to_cstring(
! 						DatumGetTextPP(
! 							DirectFunctionCall3(replace_text,
! 												CStringGetTextDatum(sql),
! 												CStringGetTextDatum("@extschema@"),
! 												CStringGetTextDatum(qSchemaName))));
  		}
  
  		/*
--- 856,894 
  	CurrentExtensionObject = extensionOid;
  	PG_TRY();
  	{
! 		char	   *c_sql = read_extension_script_file(control, filename);
! 		Datum		t_sql;
! 
! 		/* We use various functions that want to operate on text datums */
! 		t_sql = CStringGetTextDatum(c_sql);
! 
! 		/*
! 		 * Reduce any lines beginning with "\echo" to empty.  This allows
! 		 * scripts to contain messages telling people not to run them via
! 		 * psql, which has been found to be necessary due to old habits.
! 		 */
! 		t_sql = DirectFunctionCall4Coll(textregexreplace,
! 										C_COLLATION_OID,
! 										t_sql,
! 										CStringGetTextDatum("^\\\\echo.*$"),
! 										CStringGetTextDatum(""),
! 										CStringGetTextDatum("ng"));
  
  		/*
  		 * If it's not relocatable, substitute the target schema name for
  		 * occurrences of @extschema@.
  		 *
! 		 * For a relocatable extension, we needn't do this.  There cannot be
! 		 * any need for @extschema@, else it wouldn't be relocatable.
  		 */
  		if (!control->relocatable)
  		{
  			const char *qSchemaName = quote_identifier(schemaName);
  
! 			t_sql = DirectFunctionCall3(replace_text,
! 										t_sql,
! 										CStringGetTextDatum("@extschema@"),
! 										CStringGetTextDatum(qSchemaName));
  		}
  
  		/*
*** execute_extension_script(Oid extensionOi
*** 883,897 
  		 */
  		if (control->module_pathname)
  		{
! 			sql = text_to_cstring(
! 						DatumGetTextPP(
! 							DirectFunctionCall3(replace_text,
! 												CStringGetTextDatum(sql),
! 												CStringGetTextDatum("MODULE_PATHNAME"),
! 												CStringGetTextDatum(control->module_pathname))));
  		}
  
! 		execute_sql_string(sql, filename);
  	}
  	PG_CATCH();
  	{
--- 897,912 
  		 */
  		if (control->module_pathname)
  		{
! 			t_sql = DirectFunctionCall3(replace_text,
! 										t_sql,
! 										CStringGetTextDatum("MODULE_PATHNAME"),
! 										CStringGetTextDatum(control->module_pathname));
  		}
  
! 		/* And now back to C string */
! 		c_sql = text_to_cstring(DatumGetTextPP(t_sql));
! 
! 		execute_sql_string(c_sql, filename);
  	}
  	PG_CATCH();
  	{


Re: [HACKERS] pg_ctl restart - behaviour based on wrong instance

2011-10-12 Thread Bruce Momjian
Magnus Hagander wrote:
  I looked over this issue and I don't think having pg_ctl restart fall
  back to 'start' is a good solution.  I am concerned about cases where we
  start a different server without shutting down the old server, for some
  reason.  When they say 'restart', I think we have to assume they want a
  restart.
 
  What I did do was to document that not backing up postmaster.pid and
  postmaster.opts might help prevent pg_ctl from getting confused.
 
 Should we exclude postmaster.opts from streaming base backups? We
 already exclude postmaster.pid...

Uh, I think so, unless my analysis was wrong.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Magnus Hagander
On Wed, Oct 12, 2011 at 19:36, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Wed, Oct 12, 2011 at 17:40, Tom Lane t...@sss.pgh.pa.us wrote:
 The only thing the \echo approach will cost us is a few more lines of C
 code in execute_extension_script(), and I think it's more than worth
 that given the evident scope of the problem.

 +1.

 PFA, a sample patch for this --- I've only bothered to change one script
 file here, but will of course do the rest if there are no further
 objections.  The technique actually works even better than I expected,
 because of the seemingly nowhere documented fact that \quit in a script
 file doesn't terminate psql, only processing of the script.  So what
 I get is

 regression=# \i ~/postgres/share/extension/cube--1.0.sql
 Use "CREATE EXTENSION cube" to load this file.
 regression=#

 which is about as good as one could hope for.

Looks great to me.

I guess the failure scenario is if someone has an extension from 9.1.2
and tries to load it into 9.1.1 or earlier, in which case they will
get a syntax error or something when trying to run the CREATE EXTENSION
command, right? I doubt that's something worth dealing with - it's a
lot less likely to happen.

We might want to document this for other third-party extension
developers to use as a trick as well?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Wed, Oct 12, 2011 at 19:36, Tom Lane t...@sss.pgh.pa.us wrote:
 PFA, a sample patch for this --- I've only bothered to change one script
 file here, but will of course do the rest if there are no further
 objections.

 I guess the failure scenario is if someone has an extension from 9.1.2
 and tries to load it into 9.1.1 or earlier, in which case they will
 get a syntax error or something when trying to run the CREATE EXTENSION
 command, right? I doubt that's something worth dealing with - it's a
 lot less likely to happen.

Hmm, yeah, you're right.  But it doesn't seem like a big problem to me,
certainly not as big as the problem we're trying to solve.

 We might want to document this for other third-party extension
 developers to use as a trick as well?

Yes, I will add something to the docs.

regards, tom lane



Re: [HACKERS] COUNT(*) and index-only scans

2011-10-12 Thread Garick Hamlin
On Wed, Oct 12, 2011 at 03:16:54PM +0100, Greg Stark wrote:
 On Wed, Oct 12, 2011 at 2:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  I think it's overkill, and possibly unpleasantly unstable as well.
  For the initial attack on this we should just have VACUUM and ANALYZE
  count the number of all-visible blocks and store that in pg_class along
  with the tuple-count statistics.  There's no reason to think that this
  will be any worse than the way we deal with dead tuple counts, for
  instance.
 
 So I have a theory.
 
 Assuming you're in a steady-state situation the amount of all-visible
 blocks will fluctuate from a high just after vacuum to a low just
 before the next vacuum. There are other ways a block can be marked
 all-visible but for the most part I would expect the fraction to go
 steadily down until vacuum comes along and cleans things up.
 
 So if vacuum tracked the fraction of blocks marked all-visible
 *before* it processed them and the fraction it marked all-visible
 after processing we have an upper and lower bound. If we knew how long
 it's been since vacuum we could interpolate between those, or we could
 just take the mean, or we could take the lower bound as a conservative
 estimate.
 
  What I suggest as a first cut for that is: simply derate the visibility 
  fraction as the fraction
  of the table expected to be scanned gets smaller.
 
 I think there's a statistically more rigorous way of accomplishing the
 same thing. If you treat the pages we estimate we're going to read as
 a random sample of the population of pages then your expected value is
 the fraction of the overall population that is all-visible but your
 95th percentile confidence interval will be, uh, a simple formula we
 can compute but I don't recall off-hand.
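
The formula being reached for here is presumably the usual normal
approximation for a sample proportion (my reconstruction, not Greg's words):

    \hat{p} \pm z \sqrt{ \hat{p} (1 - \hat{p}) / n }

where \hat{p} is the observed all-visible fraction, n is the number of pages
treated as a sample, and z is about 1.64 for a one-sided 95th-percentile
bound.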

Incidentally, I had a similar idea at PGCon relating to planning...

My idea was to compute not just the cost but the sensitivity
of the cost estimate for each plan.   The sensitivity is the 
derivative of the cost.  So, if the total cost were n^2, the
sensitivity would be 2n.

If you picked a tolerance (like 2 standard deviations) of the 
observed distribution, you could compare the expected cost and the 
expected 'unlucky cost' for plans.  

(Basically, this is parametric error propagation) 
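
In symbols, first-order error propagation gives (notation mine):

    \sigma_C \approx \left| \frac{dC}{dn} \right| \, \sigma_n

so a plan whose cost behaves like C(n) = n^2 has \sigma_C \approx 2n \sigma_n,
matching the 2n sensitivity above.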

I know very little about the planner...
I don't know how how often it would lead to picking a better
plan (it might not be worth the cost to compute), but it seemed
like an interesting approach to me.

Garick




Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Andrew Dunstan



On 10/12/2011 02:21 PM, Magnus Hagander wrote:

On Wed, Oct 12, 2011 at 19:36, Tom Lane t...@sss.pgh.pa.us wrote:


regression=# \i ~/postgres/share/extension/cube--1.0.sql
Use "CREATE EXTENSION cube" to load this file.
regression=#

which is about as good as one could hope for.

Looks great to me.


Yes, me too.


I guess the failure scenario is if someone has an extension from 9.1.2
and tries to load it into 9.1.1 or earlier, in which case they will
get a syntax error or something when trying to run the CREATE EXTENSION
command, right? I doubt that's something worth dealing with - it's a
lot less likely to happen.



As long as we are going to apply it for 9.1 and not wait for 9.2 I don't 
think there will be much problem. I think this is one of the rare cases 
where we should apply a change to the stable release.


cheers

andrew



Re: [HACKERS] pg_comments (was: Allow \dd to show constraint comments)

2011-10-12 Thread Robert Haas
On Sun, Sep 11, 2011 at 10:11 AM, Josh Kupershmidt schmi...@gmail.com wrote:
 On Sat, Sep 10, 2011 at 7:47 PM, Thom Brown t...@linux.com wrote:
 Just tested this out on current master.  I tried this on every object
 capable of having a comment, and the view reports all of them with the
 correct details.  Doc changes look fine, except for some reason you
 removed a full-stop (period) from after For all other object types,
 this column is zero.

 Thanks for the review. Looks like I got confused about where I was
 whacking around in catalogs.sgml, good catch of a spurious change.
 Fixed patch attached.

So, I think the critical question for this patch is "do we want
this?".  Tom didn't like it, and I have to admit I'm somewhat
demoralized by the discovery that we can't make effective use of this
in psql.  On the flip side, rooting through pg_description and
pg_shdescription with home-grown queries is un-fun, and it's not clear
that \dd solves the problem well enough that we don't need anything
else.  On the third hand, Josh's previous batch of changes to clean up
psql's behavior in this area are clearly a huge improvement: you can
now display the comment for nearly anything by running the appropriate
\dfoo command for whatever the object type is.  So ... is this still
a good idea, or should we just forget about it?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 10/12/2011 02:21 PM, Magnus Hagander wrote:
 I guess the failure scenario is if someone has an extension from 9.1.2
 and tries to load it into 9.1.1 or earlier, in which case they will
 get a syntax error or something when trying to run the CREATE EXTENSION
 command, right? I doubt that's something worth dealing with - it's a
 lot less likely to happen.

 As long as we are going to apply it for 9.1 and not wait for 9.2 I don't 
 think there will be much problem. I think this is one of the rare cases 
 where we should apply a change to the stable release.

By 9.2 doing this would be rather pointless, likely.  Also, the earlier
we get it in the easier it will be for third-party devs to rely on it
working.

regards, tom lane



Re: [HACKERS] patch: move dumpUserConfig call in dumpRoles function of pg_dumpall.c

2011-10-12 Thread Phil Sorber
On Mon, Oct 10, 2011 at 11:54 AM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Aug 4, 2011 at 2:04 PM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Aug 4, 2011 at 1:53 PM, Phil Sorber p...@omniti.com wrote:
 Ok, here is the patch that just moves the ALTER/SET pieces to the end.
 Can we get this included in the next commit fest?

 Yep, just make yourself an account and add it.

 Unfortunately, it doesn't look like anyone ever replied to this
 thread, but Tom posted some thoughts on another thread that seem to me
 to be a serious problem for this patch:

 http://archives.postgresql.org/message-id/13764.1315094...@sss.pgh.pa.us

 I don't see any easy way around that problem, so I'm going to mark
 this patch Returned with Feedback for now.  It strikes me as craziness
 to try to guess which settings we should restore at the beginning and
 which at the end, so I think we need a better idea.  I don't really
 understand why it's not OK to just have pg_dump issue RESET ROLE at
 appropriate points in the process; that seems like it would be
 sufficient and not particularly ugly.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


I am going to remove that patch from the commit fest because we all
agree that it does not solve the problem satisfactorily. I would like
to re-iterate a few points, and submit a new patch.

First off, there are two distinct problems here that we have been
lumping into one. There is the issue of the 'ALTER DATABASE SET ROLE'
and then there is the 'ALTER ROLE SET ROLE' case. The former is the
one that has been causing us so many problems, and the latter is the
one that I really care about.

Also, there are more people that are hitting this issue as well:

http://archives.postgresql.org/pgsql-hackers/2011-02/msg02362.php

My proposal would be to table the discussion about the 'ALTER DATABASE
SET ROLE' case because there seems to be a bit of a philosophical
debate behind this that needs to be sorted out first.

If we focus only on the 'ALTER ROLE' case I think there is a trivial
solution. We already separate the CREATE from the ALTER in a single
loop. We also already separate out the user config in a separate
function called from this same loop. The problem is that the user
config can be dependent upon a later CREATE. So all I suggest is to
move the user config dumping into a new loop afterward so that the
user config ALTER's come after all the other CREATE's and ALTER's. It
amounts to a 7 line change and solves our problem rather elegantly.
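
The dependency in miniature (role names hypothetical):

CREATE ROLE alice;
ALTER ROLE alice SET role TO bob;   -- fails: role "bob" does not exist yet
CREATE ROLE bob;                    -- dumped too late under the old ordering

With the change below, all of the dumpUserConfig() output comes after every
CREATE ROLE, so such ALTERs succeed.
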
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
new file mode 100644
index b5f64e8..ee597d5
*** a/src/bin/pg_dump/pg_dumpall.c
--- b/src/bin/pg_dump/pg_dumpall.c
*** dumpRoles(PGconn *conn)
*** 804,814 
  			 buf, "ROLE", rolename);
  
  		fprintf(OPF, "%s", buf->data);
- 
- 		if (server_version >= 70300)
- 			dumpUserConfig(conn, rolename);
  	}
  
  	PQclear(res);
  
  	fprintf(OPF, "\n\n");
--- 804,815 
  			 buf, "ROLE", rolename);
  
  		fprintf(OPF, "%s", buf->data);
  	}
  
+ 	if (server_version >= 70300)
+ 		for (i = 0; i < PQntuples(res); i++)
+ 			dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
+ 
  	PQclear(res);
  
  	fprintf(OPF, "\n\n");



Re: [HACKERS] Dumping roles improvements?

2011-10-12 Thread Josh Berkus
On 10/11/11 9:43 PM, Tom Lane wrote:
 I don't find this terribly convincing.  I can see the rationales for two
 endpoint cases: (1) restore these objects into exactly the same
 ownership/permissions environment that existed before, and (2) restore
 these objects with the absolute minimum of ownership/permissions
 assumptions.  The latter case seems to me to be covered already by
 --no-owner --no-privileges.

But what I'm asking for is (1).  The problem is that the roles don't
ship in the per-database pgdump file.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] [v9.2] Object access hooks with arguments support (v1)

2011-10-12 Thread Robert Haas
On Thu, Sep 29, 2011 at 4:52 PM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 I noticed that the previous revision does not provide any way to inform
 the modules name of foreign server, even if foreign table was created,
 on the OAT_POST_CREATE hook.
 So, I modified the invocation at heap_create_with_catalog to deliver
 this information to the modules.

 Rest of parts were unchanged, except for rebasing to the latest git
 master.

I've never really been totally sanguine with the idea of making object
access hooks take arguments, and all of my general concerns seem to
apply to the way you've set this patch up.  In particular:

1. Type safety goes out the window.  What you're essentially proposing
here is that we should have one hook function can be used for almost
anything at all and can be getting up to 9 arguments of any type
whatsoever.  The hook will need to know how to interpret all of its
arguments depending on the context in which it was called.  The
compiler will be totally unable to do any static type-checking, so
there will be absolutely no warning if the interface is changed
incompatibly on either side.  Instead, you'll probably just crash the
database server at runtime.

2. The particular choice of data being passed to the object access
hooks appears capricious and arbitrary.  Here's an example:

/* Post creation hook for new relation */
-   InvokeObjectAccessHook(OAT_POST_CREATE, RelationRelationId, relid, 0);
+   InvokeObjectAccessHook4(OAT_POST_CREATE,
+                           RelationRelationId, relid, 0,
+                           PointerGetDatum(new_rel_tup),
+                           PointerGetDatum(tupdesc),
+                           BoolGetDatum(is_select_into),
+                           CStringGetDatum(fdw_srvname));

Now, I am sure that you have good reasons for wanting to pass those
particular things to the object access hook rather than any other
local variable or argument that might happen to be lying around at
this point in the code, but they are not documented.  If someone adds
a new argument to this function, or removes an argument that's being
passed, they will have no idea what to do about this.  Moreover, if
you did document it, I think it would boil down to this is what
sepgsql happens to need, and I don't think that's an acceptable
answer.  We have repeatedly refused to adopt that approach in the
past.

(This particular case is worse than average, because new_rel_tup is
coming from InsertPgClassTuple, which therefore has to be modified,
along with AddNewRelationTuple and index_create, to get the tuple back
up to the call site where, apparently, it is needed.)

I am not exactly sure what the right way to solve this problem is, but
I don't think this is it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Dumping roles improvements?

2011-10-12 Thread Andrew Dunstan



On 10/12/2011 12:43 AM, Tom Lane wrote:

Josh Berkus j...@agliodbs.com writes:

The reason I want to have the dependant roles created as part of a
database dump is so that we can ship around dump files as a single file,
and restore them with a single command.  This is considerably simpler
than the current requirements, which are:
1. pg_dumpall the roles
2. pg_dump the database
3. tar both files
4. ship file
5. untar both files
6. psql the role file
7. pg_restore the database file

I don't find this terribly convincing.  I can see the rationales for two
endpoint cases: (1) restore these objects into exactly the same
ownership/permissions environment that existed before, and (2) restore
these objects with the absolute minimum of ownership/permissions
assumptions.  The latter case seems to me to be covered already by
--no-owner --no-privileges.  Cases in between those endpoints seem
pretty special-purpose, and I don't want to buy into the assumption that
we should fix them by creating a plethora of --do-it-joshs-way switches.
Can we invent something comparable to the --list/--use-list mechanism,
that can handle a range of use cases with a bit more manual effort?




Not easily, that I can think of. The cleanest way I can imagine would be 
to have explicit ROLE objects in the TOC. We can easily get a list of 
object owners and turn that into a set of create role statements, 
because owner names are in the metadata, but getting a list of roles 
mentioned in ACL items can only be done by textually analysing them - 
the information just isn't kept anywhere else currently.


I do think there's a case for doing "create if not exists role foo" (I 
know we don't have that right now) for owners and roles mentioned in 
ACLs. The hair in the ointment here comes when we consider how far to go 
with that. In particular, would we follow role membership recursively?


OTOH, notwithstanding Josh's reasonable need, I'm not sure the ROI here 
is high enough.


cheers

andrew



Re: [HACKERS] Dumping roles improvements?

2011-10-12 Thread Andrew Dunstan



On 10/12/2011 03:16 PM, Josh Berkus wrote:

On 10/11/11 9:43 PM, Tom Lane wrote:

I don't find this terribly convincing.  I can see the rationales for two
endpoint cases: (1) restore these objects into exactly the same
ownership/permissions environment that existed before, and (2) restore
these objects with the absolute minimum of ownership/permissions
assumptions.  The latter case seems to me to be covered already by
--no-owner --no-privileges.

But what I'm asking for is (1).  The problem is that the roles don't
ship in the per-database pgdump file.



I think Tom's (1) assumes you already have that environment, not that it 
will be created on the fly by pg_restore.


cheers

andrew



Re: [HACKERS] patch: move dumpUserConfig call in dumpRoles function of pg_dumpall.c

2011-10-12 Thread Robert Haas
On Wed, Oct 12, 2011 at 2:27 PM, Phil Sorber p...@omniti.com wrote:
 I am going to remove that patch from the commit fest because we all
 agree that it does not solve the problem satisfactorily. I would like
 to re-iterate a few points, and submit a new patch.

 First off, there are two distinct problems here that we have been
 lumping into one. There is the issue of the 'ALTER DATABASE SET ROLE'
 and then there is the 'ALTER ROLE SET ROLE' case. The former is the
 one that has been causing us so many problems, and the latter is the
 one that I really care about.

 Also, there are more people that are hitting this issue as well:

 http://archives.postgresql.org/pgsql-hackers/2011-02/msg02362.php

 My proposal would be to table the discussion about the 'ALTER DATABASE
 SET ROLE' case because there seems to be a bit of a philosophical
 debate behind this that needs to be sorted out first.

 If we focus only on the 'ALTER ROLE' case I think there is a trivial
 solution. We already separate the CREATE from the ALTER in a single
 loop. We also already separate out the user config in a separate
 function called from this same loop. The problem is that the user
 config can be dependent upon a later CREATE. So all I suggest is to
 move the user config dumping into a new loop afterward so that the
 user config ALTER's come after all the other CREATE's and ALTER's. It
 amounts to a 7 line change and solves our problem rather elegantly.

I'm not sure I completely follow this explanation of the problem, but
it does seem better to me to set all the role attributes after dumping
all the create role statements, and I don't see how that can break
anything that works now.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Dumping roles improvements?

2011-10-12 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 On 10/11/11 9:43 PM, Tom Lane wrote:
 I don't find this terribly convincing.  I can see the rationales for two
 endpoint cases: (1) restore these objects into exactly the same
 ownership/permissions environment that existed before, and (2) restore
 these objects with the absolute minimum of ownership/permissions
 assumptions.  The latter case seems to me to be covered already by
 --no-owner --no-privileges.

 But what I'm asking for is (1).  The problem is that the roles don't
 ship in the per-database pgdump file.

In that case you do "pg_dumpall -r" first and then pg_dump your
individual database.  I thought you were looking for something that
would dump only roles referenced in the particular database, which
is why it sounded like an intermediate case.

I know that the division of labor between pg_dumpall and pg_dump could
use rethinking, but it needs to be actually rethought, in toto, not
hacked one minor feature at a time.

regards, tom lane



Re: [HACKERS] patch: move dumpUserConfig call in dumpRoles function of pg_dumpall.c

2011-10-12 Thread Phil Sorber
On Wed, Oct 12, 2011 at 3:48 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Oct 12, 2011 at 2:27 PM, Phil Sorber p...@omniti.com wrote:
 I am going to remove that patch from the commit fest because we all
 agree that it does not solve the problem satisfactorily. I would like
 to re-iterate a few points, and submit a new patch.

 First off, there are two distinct problems here that we have been
 lumping into one. There is the issue of the 'ALTER DATABASE SET ROLE'
 and then there is the 'ALTER ROLE SET ROLE' case. The former is the
 one that has been causing us so many problems, and the latter is the
 one that I really care about.

 Also, there are more people that are hitting this issue as well:

 http://archives.postgresql.org/pgsql-hackers/2011-02/msg02362.php

 My proposal would be to table the discussion about the 'ALTER DATABASE
 SET ROLE' case because there seems to be a bit of a philosophical
 debate behind this that needs to be sorted out first.

 If we focus only on the 'ALTER ROLE' case I think there is a trivial
 solution. We already separate the CREATE from the ALTER in a single
 loop. We also already separate out the user config in a separate
 function called from this same loop. The problem is that the user
 config can be dependent upon a later CREATE. So all I suggest is to
 move the user config dumping into a new loop afterward so that the
 user config ALTER's come after all the other CREATE's and ALTER's. It
 amounts to a 7 line change and solves our problem rather elegantly.

 I'm not sure I completely follow this explanation of the problem, but
 it does seem better to me to set all the role attributes after dumping
 all the create role statements, and I don't see how that can break
 anything that works now.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


Just to be clear, this doesn't move all the ALTER's. It will still do
CREATE/ALTER pairs for attributes that could go in the CREATE.

Example:

CREATE ROLE bob;
ALTER ROLE bob WITH LOGIN PASSWORD 'blah';

You could turn those in to one statement, but for various reasons you
do not want to. None of these have any dependencies other than the
actual creation of the role.

Then there is a separate section of code that is called as a separate
function 'dumpUserConfig()' that does other role attributes like
'ALTER ROLE bob SET role TO charlie'. These are the ALTER's that can
have dependencies on other roles.

I agree this is a little confusing, and if you prefer to see all the
ALTER's in one section together directly after the CREATE statements I
can make the patch do that, but it isn't necessary to solve the
problem.
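
To make the proposed ordering concrete, a sketch of pg_dumpall output
after the patch, using the names from the examples above, might be:

    -- first pass: CREATE/ALTER attribute pairs, no cross-role dependencies
    CREATE ROLE bob;
    ALTER ROLE bob WITH LOGIN PASSWORD 'blah';
    CREATE ROLE charlie;

    -- second pass: dumpUserConfig() settings, which may reference other roles
    ALTER ROLE bob SET role TO charlie;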



Re: [HACKERS] branching for 9.2devel

2011-10-12 Thread Bruce Momjian
Alvaro Herrera wrote:
 Excerpts from Tom Lane's message of Mon Apr 25 20:48:42 -0300 2011:
  Andrew Dunstan and...@dunslane.net writes:
 
   Well, that way you'll have a handful of -Ttypedef parameters for each 
   invocation of indent instead of a gazillion of them. No more command 
   line length issues.
  
  Well, -Ttypedef is wrong on its face.  Right would be a switch
  specifying the name of the file to read the typedef list from.
  Then you don't need massive script-level infrastructure to try
  to spoonfeed that data to the program doing the work.
 
 I gather that Andrew will be working on replacing the pgindent shell
 script with a Perl script, but this new script will still rely on our
 patched BSD indent, right?
 
 Of course, it would make sense to further patch indent so that it
 accepts typedefs in a file instead of thousands of -T switches, but that
 seems orthogonal.

I have done as you suggested, modifying our version of BSD indent to
allow a file of typedefs to be processed.  I also renamed the download
and binary to 'pg_bsd_indent' so it can be installed on a system that
already has 'indent'.  It should propagate to the ftp mirrors in a few
hours.  Even after we go to Perl, this is still a necessary change.

I have modified the pgindent script to use this new flag, and applied
those changes, attached.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/src/tools/pgindent/README b/src/tools/pgindent/README
new file mode 100644
index e81e85d..7504650
*** a/src/tools/pgindent/README
--- b/src/tools/pgindent/README
*** pgindent
*** 6,31 
  This can format all PostgreSQL *.c and *.h files, but excludes *.y, and
  *.l files.
  
! 1) Change directory to the top of the build tree.
  
! 2) Download the typedef file from the buildfarm:
  
  	wget -O src/tools/pgindent/typedefs.list http://buildfarm.postgresql.org/cgi-bin/typedefs.pl
  
! 3) Remove all derived files (pgindent has trouble with one of the flex macros):
  
  	gmake maintainer-clean
  
! 4) Run pgindent:
  
  	find . -name '*.[ch]' -type f -print | \
  	egrep -v -f src/tools/pgindent/exclude_file_patterns | \
  	xargs -n100 pgindent src/tools/pgindent/typedefs.list
  
! 5) Remove any files that generate errors and restore their original
 versions.
  
! 6) Do a full test build:
  
  	run configure
  	# stop is only necessary if it's going to install in a location with an
--- 6,33 
  This can format all PostgreSQL *.c and *.h files, but excludes *.y, and
  *.l files.
  
! 1) Install pg_bsd_indent (see below for details)
  
! 2) Change directory to the top of the build tree.
! 
! 3) Download the typedef file from the buildfarm:
  
  	wget -O src/tools/pgindent/typedefs.list http://buildfarm.postgresql.org/cgi-bin/typedefs.pl
  
! 4) Remove all derived files (pgindent has trouble with one of the flex macros):
  
  	gmake maintainer-clean
  
! 5) Run pgindent:
  
  	find . -name '*.[ch]' -type f -print | \
  	egrep -v -f src/tools/pgindent/exclude_file_patterns | \
  	xargs -n100 pgindent src/tools/pgindent/typedefs.list
  
! 6) Remove any files that generate errors and restore their original
 versions.
  
! 7) Do a full test build:
  
  	run configure
  	# stop is only necessary if it's going to install in a location with an
*** This can format all PostgreSQL *.c and *
*** 38,54 
  
  ---
  
! We have standardized on NetBSD's indent.  We have fixed a few bugs which
! require the NetBSD source to be patched with indent.bsd.patch patch.  A
! fully patched version is available at ftp://ftp.postgresql.org/pub/dev.
  
  GNU indent, version 2.2.6, has several problems, and is not recommended.
  These bugs become pretty major when you are doing 500k lines of code.
  If you don't believe me, take a directory and make a copy.  Run pgindent
  on the copy using GNU indent, and do a diff -r. You will see what I
! mean. GNU indent does some things better, but mangles too.
  
! Notes about excluded files:
  
  src/include/storage/s_lock.h is excluded because it contains assembly code
  that pgindent tends to mess up.
--- 40,67 
  
  ---
  
! BSD indent
! --
! 
! We have standardized on NetBSD's indent, and renamed it pg_bsd_indent. 
! We have fixed a few bugs which require the NetBSD source to be patched
! with indent.bsd.patch patch.  A fully patched version is available at
! ftp://ftp.postgresql.org/pub/dev.
  
  GNU indent, version 2.2.6, has several problems, and is not recommended.
  These bugs become pretty major when you are doing 500k lines of code.
  If you don't believe me, take a directory and make a copy.  Run pgindent
  on the copy using GNU indent, and do a diff -r. You will see what I
! mean. GNU indent 

Re: [HACKERS] Formatting Curmudgeons WAS: MMAP Buffers

2011-10-12 Thread Bruce Momjian
Andrew Dunstan wrote:
 Now we could certainly make this quite a bit slicker. Apart from 
 anything else, we should change the indent source code tarball so it 
 unpacks into its own directory. Having it unpack into the current 

Yes, done.

 directory is ugly and unfriendly. And we should get rid of the make 
 clean line in the install target of entab's makefile, which just seems 
 totally ill-conceived.

Yes, fixed.

 It might also be worth setting it up so that instead of having to pass a 
 path to a typedefs file on the command line, we default to a file 
 sitting in, say, /usr/local/etc. Then you'd just be able to say 
 pgindent my_file.c.

Yes, also done.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pgindent weirdness

2011-10-12 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Tom Lane wrote:
  Now having said that, there seems to be a pgindent bug here too, in that
  it's throwing a space into
  
  Buffer
  RelationGetBufferForTuple(Relation relation, Size len,
  Buffer otherBuffer, int options,
  struct BulkInsertStateData * bistate)
  
  Whether BulkInsertStateData is flagged as a typedef or not, surely it
  ought to understand that struct BulkInsertStateData is a type name.
 
  Uh, I think we have this listed as a known bug at the top of the
  pgindent script:
 
 Hm.  I guess the third observation is that in the current state of the
 code, there's no very good reason to be using struct in
 RelationGetBufferForTuple's prototype anyway, since the typedef is
 declared right above it.  Maybe we should just change that and not
 risk fooling with pgindent.

Changed as you suggested.  I didn't see any other obvious cases.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
new file mode 100644
index 26db1e3..beecc90
*** a/src/backend/access/heap/hio.c
--- b/src/backend/access/heap/hio.c
*** GetVisibilityMapPins(Relation relation, 
*** 213,219 
  Buffer
  RelationGetBufferForTuple(Relation relation, Size len,
  		  Buffer otherBuffer, int options,
! 		  struct BulkInsertStateData * bistate,
  		  Buffer *vmbuffer, Buffer *vmbuffer_other)
  {
  	bool		use_fsm = !(options & HEAP_INSERT_SKIP_FSM);
--- 213,219 
  Buffer
  RelationGetBufferForTuple(Relation relation, Size len,
  		  Buffer otherBuffer, int options,
! 		  BulkInsertState bistate,
  		  Buffer *vmbuffer, Buffer *vmbuffer_other)
  {
  	bool		use_fsm = !(options & HEAP_INSERT_SKIP_FSM);
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
new file mode 100644
index 6eac97e..aefd679
*** a/src/include/access/hio.h
--- b/src/include/access/hio.h
*** extern void RelationPutHeapTuple(Relatio
*** 38,44 
  	 HeapTuple tuple);
  extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
  		  Buffer otherBuffer, int options,
! 		  struct BulkInsertStateData * bistate,
  		  Buffer *vmbuffer, Buffer *vmbuffer_other);
  
  #endif   /* HIO_H */
--- 38,44 
  	 HeapTuple tuple);
  extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
  		  Buffer otherBuffer, int options,
! 		  BulkInsertState bistate,
  		  Buffer *vmbuffer, Buffer *vmbuffer_other);
  
  #endif   /* HIO_H */



Re: [HACKERS] pl/perl example in the doc no longer works in 9.1

2011-10-12 Thread Tom Lane
David E. Wheeler da...@kineticode.com writes:
 On Oct 12, 2011, at 9:15 AM, Tom Lane wrote:
 Well, the real question is why a function declared to return VOID cares
 at all about what the last command in its body is.  If this has changed
 since previous versions then I think it's a bug and we should fix it,
 not just change the example.

 It has indeed changed, either in 9.0 or 9.1 (the latter, I think). I had to 
 add a bunch of bare return statements to existing functions.

This appears to have gotten broken in commit
87bb2ade2ce646083f39d5ab3e3307490211ad04, which changed the function
return code to go through plperl_sv_to_datum, which is making
unwarranted assumptions ... but since it's utterly bereft of comments,
it's hard for a non Perl hacker to identify exactly what it should do
instead.  The core of the problem seems to be that if SvROK(sv) then
the code assumes that it must be intended to convert that to an array or
composite, no matter whether the declared result type of the function is
compatible with such a thing.  So I think this probably broke not only
VOID-result cases, but also cases where a Perl array or hash is supposed
to be returned as, say, text.  It would be more appropriate to drive the
cases off the nature of the function result type, perhaps.

regards, tom lane
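
A minimal reproduction of the VOID case, assuming a scratch table t
(hypothetical), would be a function whose body's last expression happens
to be a reference:

    CREATE OR REPLACE FUNCTION log_it() RETURNS void AS $$
        # spi_exec_query returns a hashref; as the body's last expression
        # it takes the SvROK() path described above, even though the
        # declared result type is void
        spi_exec_query("INSERT INTO t VALUES (1)");
    $$ LANGUAGE plperl;

    SELECT log_it();  -- fine on 9.0, fails on unpatched 9.1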



Re: [HACKERS] ALTER EXTENSION .. ADD/DROP weirdness

2011-10-12 Thread Robert Haas
On Mon, Oct 10, 2011 at 2:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 But there's a bigger problem: it seems to me that we have an
 inconsistency between what happens when you create an extension from
 scratch and when you upgrade it from unpackaged.  Both pg_buffercache
 and pg_stat_statements just do this in the upgrade from unpackaged
 case:

 ALTER EXTENSION <ext-name> ADD view <view-name>;

 They do *not* add the type and the array type.  But when the 1.0
 script is run, the type and array type end up belonging to the
 extension.  This seems bad.

 Hmm, yeah, we need to make those consistent.

 The underlying issue here is whether objects dependent on an extension
 member should have direct dependencies on the extension too, and if not,
 how do we prevent that?  The recordDependencyOnCurrentExtension calls
 don't have enough information to know what to do, I think.

After looking at this code, it seems that we've generally made that
the caller's problem - e.g. in heap_create_with_catalog(), we skip
recordDependencyOnCurrentExtension() if we're dealing with a composite
type.  So I think the fix here is just to move the
recordDependencyOnCurrentExtension() call in pg_type.c inside the
if-block that precedes it, as in the attached patch.

Of course, this won't fix any damage already done, but it seems like
the right thing going forward.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


extension-type-dependencies.patch
Description: Binary data
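
One way to compare the two code paths is to inspect extension membership
directly in pg_depend (deptype 'e'); a query along these lines, with
pg_buffercache as the example, shows whether the type and array type ended
up as members:

    SELECT classid::regclass AS catalog, objid, deptype
    FROM pg_depend
    WHERE refclassid = 'pg_extension'::regclass
      AND refobjid = (SELECT oid FROM pg_extension
                      WHERE extname = 'pg_buffercache');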



Re: [HACKERS] [BUGS] *.sql contrib files contain unresolvable MODULE_PATHNAME

2011-10-12 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 regression=# \i ~/postgres/share/extension/cube--1.0.sql
 Use CREATE EXTENSION cube to load this file.
 regression=# 

Great work, thank you!
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Dumping roles improvements?

2011-10-12 Thread Josh Berkus

 In that case you do pg_dumpall -r first and then pg_dump your
 individual database.  I thought you were looking for something that
 would dump only roles referenced in the particular database, which
 is why it sounded like an intermediate case.
 
 I know that the division of labor between pg_dumpall and pg_dump could
 use rethinking, but it needs to be actually rethought, in toto, not
 hacked one minor feature at a time.

Sure.  Maybe I should start a wiki page for that?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] ALTER EXTENSION .. ADD/DROP weirdness

2011-10-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Mon, Oct 10, 2011 at 2:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 The underlying issue here is whether objects dependent on an extension
 member should have direct dependencies on the extension too, and if not,
 how do we prevent that?  The recordDependencyOnCurrentExtension calls
 don't have enough information to know what to do, I think.

 After looking at this code, it seems that we've generally made that
 the caller's problem - e.g. in heap_create_with_catalog(), we skip
 recordDependencyOnCurrentExtension() if we're dealing with a composite
 type.  So I think the fix here is just to move the
 recordDependencyOnCurrentExtension() call in pg_type.c inside the
 if-block that precedes it, as in the attached patch.

Hmm.  I'm afraid that's going to break something, because I had had it
like that originally and changed it in commit
988620dd8c16d77f88ede167b22056176324.  However, I'm not quite sure
*what* it will break, because it seems like in general extension
dependencies ought to act pretty nearly like owner dependencies.
In a quick look, this seems to be the only place where we're doing it
differently (without a clear reason) for recordDependencyOnOwner and
recordDependencyOnCurrentExtension.

Let me poke at it a bit more.  The proposed patch is a bit short on
comment fixes, anyway.

regards, tom lane



Re: [HACKERS] Overhead cost of Serializable Snapshot Isolation

2011-10-12 Thread Simon Riggs
On Wed, Oct 12, 2011 at 1:44 PM, Florian Pflug f...@phlo.org wrote:
 On Oct11, 2011, at 23:35 , Simon Riggs wrote:
 On Tue, Oct 11, 2011 at 10:30 PM, Florian Pflug f...@phlo.org wrote:

 That experience has taught me that backwards compatibility, while very
 important in a lot of cases, has the potential to do just as much harm
 if overdone.

 Agreed. Does my suggestion represent overdoing it? I ask for balance,
 not an extreme.

 It's my belief that an off switch for true serializability is overdoing
 it, yes.

 With such a switch, every application that relies on true serializability
 for correctness would be prone to silent data corruption should the switch
 ever get set to off accidentally.

 Without such a switch, OTOH, all that will happen is a few more aborts due
 to serialization errors in applications that request SERIALIZABLE when they
 really only need REPEATABLE READ. Which, in the worst case, is a performance
 issue, but never an issue of correctness.

That's a good argument and I accept it.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
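
For reference, the per-transaction alternative implied by Florian's
argument: an application that only needs snapshot semantics can request
them explicitly instead of relying on a global off switch, e.g.

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    -- snapshot isolation, i.e. pre-9.1 SERIALIZABLE behavior
    COMMIT;

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- true serializability; may abort with SQLSTATE 40001 and need a retry
    COMMIT;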



Re: [HACKERS] ts_rank

2011-10-12 Thread Bruce Momjian
Bruce Momjian wrote:
 Mark wrote:
  There's some potentially useful information here:
  http://www.postgresql.org/docs/9.0/interactive/textsearch-controls.html#TEXTSEARCH-RANKING
  
  Thanks for the reply. I was reading the PostgreSQL documentation, but it
  does not name the methods used. Everywhere it just says that ts_rank uses
  a standard ranking function, but it is difficult to say which standard
  function that is.
  Somewhere I found that it may be based on the vector space model, and that
  seems plausible, because in the code of tsrank.c the frequency of words is
  counted, but I am not sure of that :-(
 
 Oleg, Teodor, can you give me a description of how ts_rank decides how
 to rank items?  Thanks.

Any news on this question?

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pl/perl example in the doc no longer works in 9.1

2011-10-12 Thread Alex Hunsaker
On Wed, Oct 12, 2011 at 15:00, Tom Lane t...@sss.pgh.pa.us wrote:
 David E. Wheeler da...@kineticode.com writes:
 On Oct 12, 2011, at 9:15 AM, Tom Lane wrote:
 Well, the real question is why a function declared to return VOID cares
 at all about what the last command in its body is.  If this has changed
 since previous versions then I think it's a bug and we should fix it,
 not just change the example.

 It has indeed changed, either in 9.0 or 9.1 (the latter, I think). I had to 
 add a bunch of bare return statements to existing functions.

[ For those that missed it, 9.0 is OK, it is indeed a bug in 9.1. ]

 This appears to have gotten broken in commit
 87bb2ade2ce646083f39d5ab3e3307490211ad04, which changed the function
 return code to go through plperl_sv_to_datum, which is making
 unwarranted assumptions ... but since it's utterly bereft of comments,
 it's hard for a non Perl hacker to identify exactly what it should do
 instead.

Yeah, it's a mess.

 The core of the problem seems to be that if SvROK(sv) then
 the code assumes that it must be intended to convert that to an array or
 composite, no matter whether the declared result type of the function is
 compatible with such a thing.

Hrm, well 9.0 and below did not get this right either:
create or replace function test_array() returns text as $$ return
['a']; $$ language plperl;
select test_array();
      test_array
-----------------------
 ARRAY(0x7fd92384dcb8)
(1 row)

create or replace function test_hash() returns text as $$ return
{'a'=>1}; $$ language plperl;
select test_hash();
      test_hash
----------------------
 HASH(0x7fd92387f848)
(1 row)


9.1 does this:
select test_array();
 test_array
------------
 \x01
(1 row)

select test_hash();
ERROR:  type text is not composite
CONTEXT:  PL/Perl function test_hash

  So I think this probably broke not only
 VOID-result cases, but also cases where a Perl array or hash is supposed
 to be returned as, say, text.

Given the output above (both pre-9.1 and post) it seems that unless the
type is a set or composite we should throw an error. Maybe "PL/Perl
function returning type %s must not return a reference"?

  It would be more appropriate to drive the
 cases off the nature of the function result type, perhaps.

I'll see if I can cook up something that's not too invasive.

[ I have a patch to fix the VOID issues. It gets rid of that horrid
has_retval variable/logic and makes it look much closer to 9.0's code.
Unfortunately it's on my laptop at home, as I was hacking on it before I
went to work... ]



Re: [HACKERS] ALTER EXTENSION .. ADD/DROP weirdness

2011-10-12 Thread Tom Lane
I wrote:
 Hmm.  I'm afraid that's going to break something, because I had had it
 like that originally and changed it in commit
 988620dd8c16d77f88ede167b22056176324.  However, I'm not quite sure
 *what* it will break, because it seems like in general extension
 dependencies ought to act pretty nearly like owner dependencies.
 In a quick look, this seems to be the only place where we're doing it
 differently (without a clear reason) for recordDependencyOnOwner and
 recordDependencyOnCurrentExtension.

After studying the code a bit more, I think I was worrying about some
corner cases involving shell type replacement; but they're not
interesting enough to justify making the main-line cases harder to work
with.  So I think this is a good fix, and I applied it with some comment
adjustments.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: move dumpUserConfig call in dumpRoles function of pg_dumpall.c

2011-10-12 Thread Tom Lane
Phil Sorber p...@omniti.com writes:
 Then there is a separate section of code that is called as a separate
 function 'dumpUserConfig()' that does other role attributes like
 'ALTER ROLE bob SET role TO charlie'. These are the ALTER's that can
 have dependencies on other roles.

Right.  Phrased that way, this is an obvious improvement, and I concur
that it doesn't look like it could break anything that works now.
The more general problem remains to be solved, but we might as well
apply this.

regards, tom lane



Re: [HACKERS] Patch: Perl xsubpp

2011-10-12 Thread David E. Wheeler
On Sep 15, 2011, at 3:04 PM, Alex Hunsaker wrote:

 Close, seems I was wrong about the typemap ExtUtils::ParseXS does not
 install a new one so we still need to point to the one in privlib.
 Also xsubpp is not executable so the test should be -r or something.
 
 Also don't think we should change the configure switch tests to test 
 XSUBPPDIR.
 
 Find those plus some minor typos fixed in the attached.
 xsubpp_v3.patch
 -- 

Doesn't look like this has been applied yet. I think it ought to be backported, 
too, frankly. Did I miss it?

Best,

David


Re: [HACKERS] Patch: Perl xsubpp

2011-10-12 Thread Alex Hunsaker
On Wed, Oct 12, 2011 at 17:53, David E. Wheeler da...@kineticode.com wrote:
 On Sep 15, 2011, at 3:04 PM, Alex Hunsaker wrote:

 Close, seems I was wrong about the typemap ExtUtils::ParseXS does not
 install a new one so we still need to point to the one in privlib.
 Also xsubpp is not executable so the test should be -r or something.

 Also don't think we should change the configure switch tests to test 
 XSUBPPDIR.

 Find those plus some minor typos fixed in the attached.
 xsubpp_v3.patch
 --

 Doesn't look like this has been applied yet. I think it ought to be 
 backported, too, frankly. Did I miss it?

Nah, probably should add it to the next commit fest so it does not get
forgotten.



Re: [HACKERS] loss of transactions in streaming replication

2011-10-12 Thread Fujii Masao
On Wed, Oct 12, 2011 at 10:29 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Oct 12, 2011 at 5:45 AM, Fujii Masao masao.fu...@gmail.com wrote:
 In 9.2dev and 9.1, when walreceiver detects an error while sending data to
 WAL stream, it always emits ERROR even if there are data available in the
 receive buffer. This might lead to loss of transactions because such
 remaining data are not received by walreceiver :(

 Won't it just reconnect?

Yes, if the master is running normally. OTOH, if the master is not running
(i.e., the failover case), the standby cannot receive the data that it
previously failed to receive.

I found this issue when I shut down the master. When the master shuts down,
it sends the shutdown checkpoint record, but I found that the standby failed
to receive it.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] pg_comments (was: Allow \dd to show constraint comments)

2011-10-12 Thread Josh Kupershmidt
On Wed, Oct 12, 2011 at 2:49 PM, Robert Haas robertmh...@gmail.com wrote:
 So, I think the critical question for this patch is "do we want
 this?".

Yep. Or put another way, are the gains worth having another system
view we'll have to maintain forever?

 Tom didn't like it,

In [1], Tom seemed to be mainly angling for fixing up psql instead,
which has been done now. I didn't see a specific reason against adding
the view, other than that it cannot be changed without an initdb. That's
a valid concern of course, but it applies equally well to other system
views.

[snip]
 On the third hand, Josh's previous batch of changes to clean up
 psql's behavior in this area are clearly a huge improvement: you can
 now display the comment for nearly anything by running the appropriate
 \dfoo command for whatever the object type is.  So ... is this still
 a good idea, or should we just forget about it?

I think this question is a part of a broader concern, namely do we
want to create and support system views for easier access to
information which is already available in different ways through psql
commands, or by manually digging around in the catalogs? I believe
there are at least several examples of existing views we maintain
which are very similar to pg_comments: pg_seclabel seems quite
similar, for instance.

Josh

--
[1] http://archives.postgresql.org/pgsql-hackers/2010-09/msg01081.php
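
For reference, the sort of thing a pg_comments view would wrap; this
hand-rolled sketch covers only relations, via the existing pg_description
catalog:

    SELECT c.oid::regclass AS object, d.description
    FROM pg_description d
    JOIN pg_class c ON c.oid = d.objoid
    WHERE d.classoid = 'pg_class'::regclass
      AND d.objsubid = 0;  -- 0 means the relation itself, not a column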



Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-10-12 Thread Josh Berkus
Simon,

I haven't seen a response from you on a proposed way to keep backwards
compatibility with recovery.conf as a trigger file, while also
eliminating its trigger status as an unmanagable misfeature.  As far as
I can tell, that's the one area where we *cannot* maintain backwards
compatibility.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] [v9.2] make_greater_string() does not return a string in some cases

2011-10-12 Thread Kyotaro HORIGUCHI
Hello, the work is finished.

 Version 4 of the patch is attached to this message.

 - Add rough description of the algorithm as comment to
   pg_utf8_increment() and pg_eucjp_increment().

 - Fixed a bug in pg_utf8_increment() found while
   working. pg_(utf8|eucjp)_increment were retested over all valid
   code points to confirm they are handled properly.

 - The comment previously pointed out as being wrong in grammar
   is left untouched. I'm sorry to bother you with my poor
   English.


At Tue, 11 Oct 2011 16:55:00 +0900 (JST), Kyotaro HORIGUCHI 
horiguchi.kyot...@oss.ntt.co.jp wrote 
One thing I still think it would be useful to add,
  though, is some comments to pg_utf8_increment() and
  pg_eucjp_increment() describing the algorithm being used.  Can you
  take a crack at that?
 
  Yes I'll do it in a day or two.

 Regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
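
For context before the patch: make_greater_string() supplies the upper
bound when the planner converts an anchored pattern into an index range,
so (under C collation) the code path is exercised by queries of roughly
this shape (table and column hypothetical):

    EXPLAIN SELECT * FROM t WHERE name LIKE 'foo%';
    -- Index Cond: ((name >= 'foo'::text) AND (name < 'fop'::text))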
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 8ceea82..59f8c37 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -5664,6 +5664,19 @@ pattern_selectivity(Const *patt, Pattern_Type ptype)
 
 
 /*
+ * This function is the character increment function for bytea used in
+ * make_greater_string(); it has the same interface as pg_wchar_tbl.charinc.
+ */
+static bool
+byte_increment(unsigned char *ptr, int len)
+{
+	if (*ptr >= 255) return false;
+
+	(*ptr)++;
+	return true;
+}
+
+/*
  * Try to generate a string greater than the given string or any
  * string it is a prefix of.  If successful, return a palloc'd string
  * in the form of a Const node; else return NULL.
@@ -5702,6 +5715,7 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 	int			len;
 	Datum		cmpstr;
 	text	   *cmptxt = NULL;
+	character_incrementer charincfunc;
 
 	/*
 	 * Get a modifiable copy of the prefix string in C-string format, and set
@@ -5763,29 +5777,33 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 		}
 	}
 
+	if (datatype == BYTEAOID)
+		charincfunc = byte_increment;
+	else
+		charincfunc = pg_database_encoding_character_incrementer();
+
 	while (len  0)
 	{
-		unsigned char *lastchar = (unsigned char *) (workstr + len - 1);
-		unsigned char savelastchar = *lastchar;
+		int charlen = 1;
+		unsigned char *lastchar;
+		Const	   *workstr_const;
+		
+		if (datatype != BYTEAOID)
+			charlen = len - pg_mbcliplen(workstr, len, len - 1);
+		
+		lastchar = (unsigned char *) (workstr + len - charlen);
 
 		/*
-		 * Try to generate a larger string by incrementing the last byte.
+		 * Try to generate a larger string by incrementing the last byte or
+		 * character.
 		 */
-		while (*lastchar < (unsigned char) 255)
-		{
-			Const	   *workstr_const;
-
-			(*lastchar)++;
 
-			if (datatype != BYTEAOID)
-			{
-/* do not generate invalid encoding sequences */
-if (!pg_verifymbstr(workstr, len, true))
-	continue;
-workstr_const = string_to_const(workstr, datatype);
-			}
-			else
+		if (charincfunc(lastchar, charlen))
+		{
+			if (datatype == BYTEAOID)
 workstr_const = string_to_bytea_const(workstr, len);
+			else
+workstr_const = string_to_const(workstr, datatype);
 
 			if (DatumGetBool(FunctionCall2Coll(ltproc,
 			   collation,
@@ -5804,20 +5822,10 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 			pfree(workstr_const);
 		}
 
-		/* restore last byte so we don't confuse pg_mbcliplen */
-		*lastchar = savelastchar;
-
 		/*
-		 * Truncate off the last character, which might be more than 1 byte,
-		 * depending on the character encoding.
+		 * Truncate off the last character or byte.
 		 */
-		if (datatype != BYTEAOID && pg_database_encoding_max_length() > 1)
-			len = pg_mbcliplen(workstr, len, len - 1);
-		else
-			len -= 1;
-
-		if (datatype != BYTEAOID)
-			workstr[len] = '\0';
+		len -= charlen;
 	}
 
 	/* Failed... */
diff --git a/src/backend/utils/mb/wchar.c b/src/backend/utils/mb/wchar.c
index f23732f..a213636 100644
--- a/src/backend/utils/mb/wchar.c
+++ b/src/backend/utils/mb/wchar.c
@@ -1336,6 +1336,264 @@ pg_utf8_islegal(const unsigned char *source, int length)
 
 /*
  *---
+ * character incrementer
+ *
+ * These functions accept charptr, a pointer to the first byte of a
+ * maybe-multibyte character.  They try to `increment' the character
+ * and return true on success.  If these functions return false, the
+ * character is left untouched.  These functions must be implemented
+ * in correspondence with the verifiers; in other words, a character
+ * rewritten by these functions must pass the check by pg_*_verifier()
+ * when they return true.
+ * ---
+ */
+
+#ifndef FRONTEND
+static bool
+pg_generic_charinc(unsigned char *charptr, int len)
+{
+ 	unsigned char *lastchar = (unsigned char *) (charptr + len - 1);
+ 	unsigned char savelastchar = *lastchar;
+ 	const char 

Re: [HACKERS] pl/perl example in the doc no longer works in 9.1

2011-10-12 Thread Alex Hunsaker
On Wed, Oct 12, 2011 at 15:33, Alex Hunsaker bada...@gmail.com wrote:
 On Wed, Oct 12, 2011 at 15:00, Tom Lane t...@sss.pgh.pa.us wrote:

 The core of the problem seems to be that if SvROK(sv) then
 the code assumes that it must be intended to convert that to an array or
 composite, no matter whether the declared result type of the function is
 compatible with such a thing.

 Hrm, well 9.0 and below did not get this right either:
 create or replace function test_array() returns text as $$ return
 ['a']; $$ language plperl;
 select test_array();
       test_array
 -----------------------
  ARRAY(0x7fd92384dcb8)
 (1 row)

 create or replace function test_hash() returns text as $$ return
 {'a'=>1}; $$ language plperl;
 select test_hash();
       test_hash
 ----------------------
  HASH(0x7fd92387f848)
 (1 row)


 Given the output above (both pre-9.1 and post) it seems that unless the
 type is a set or composite we should throw an error. Maybe "PL/Perl
 function returning type %s must not return a reference"?

  It would be more appropriate to drive the
 cases off the nature of the function result type, perhaps.

 I'll see if I can cook up something that's not too invasive.

PFA my attempt at a fix.

This gets rid of most of the if/else chain and the has_retval crap
in plperl_func_handler(). Instead we let plperl_sv_to_datum() do most of
the lifting. It also now handles VOIDOID and checks that the requested
result oid can be converted from the perl structure. For example, if
you passed in a hashref with a result oid that was not a rowtype, it
will error out with "PL/Perl cannot convert hash to non-rowtype %s".
Arrays behave similarly.

One side effect is you can now return a composite literal where you
could not before. (We already let you return array literals.)

The comments might still be a bit sparse -- I'm hoping the added errors
make things a bit more self-explanatory.

A large portion of the diff is added regression tests, testing what
happens when you return various references.

Comments?


plperl_returns.patch.gz
Description: GNU Zip compressed data
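
A sketch of the composite-literal side effect mentioned above, using a
hypothetical row type:

    CREATE TYPE pair AS (a int, b int);
    CREATE OR REPLACE FUNCTION make_pair() RETURNS pair AS $$
        return "(1,2)";  # a composite literal, accepted with this patch
    $$ LANGUAGE plperl;
    SELECT * FROM make_pair();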



Re: [HACKERS] Online base backup from the hot-standby

2011-10-12 Thread Jun Ishiduka

   ERROR: full_page_writes on master is set invalid at least once since
   latest checkpoint
  
   I think this error should be rewritten as
   ERROR: full_page_writes on master has been off at some point since
   latest checkpoint
  
   We should be using 'off' instead of 'invalid' since that is what
   the user sets it to.
 
  Sure.
 
 What about the following message? It sounds more precise to me.
 
 ERROR: WAL generated with full_page_writes=off was replayed since last
 restartpoint

Okay, I changed the patch to use this message.
If someone suggests a better idea, I will reconsider.


  I updated the patch to address the above comments.
 
 Thanks for updating the patch! Here are the comments:
 
* don't yet have the insert lock, forcePageWrites could change under us,
* but we'll recheck it once we have the lock.
*/
 - doPageWrites = fullPageWrites || Insert->forcePageWrites;
 + doPageWrites = Insert->fullPageWrites || Insert->forcePageWrites;
 
 The source comment needs to be modified.

* just turned off, we could recompute the record without full pages, but
* we choose not to bother.)
*/
 - if (Insert->forcePageWrites && !doPageWrites)
 + if ((Insert->fullPageWrites || Insert->forcePageWrites) &&
 !doPageWrites)
 
 Same as above.

Sure.


 XLogReportParameters() should skip writing WAL if full_page_writes has not 
 been
 changed by SIGHUP.
 
 XLogReportParameters() should skip updating pg_control if any parameter 
 related
 to hot standby has not been changed.

YES.


 In checkpoint, XLogReportParameters() is called only when wal_level is
 hot_standby.
 OTOH, in walwriter, it's always called even when wal_level is not hot_standby.
 Can't we skip calling XLogReportParameters() whenever wal_level is not
 hot_standby?

Yes, it is possible.


 In do_pg_start_backup() and do_pg_stop_backup(), the spinlock must be held to
 see XLogCtl-lastFpwDisabledLSN.

Yes.


 What about changing the error message to:
 ERROR: WAL generated with full_page_writes=off was replayed during online 
 backup

Okay, too.



Jun Ishizuka
NTT Software Corporation
TEL:045-317-7018
E-Mail: ishizuka@po.ntts.co.jp
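
With the agreed wording, the failure mode on the standby would then
surface roughly like this (backup label hypothetical):

    SELECT pg_start_backup('nightly');
    ERROR:  WAL generated with full_page_writes=off was replayed since last restartpoint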






Re: [HACKERS] Online base backup from the hot-standby

2011-10-12 Thread Jun Ishiduka

Sorry.
I was not previously able to answer fujii's all comments.
This is the remaining answers.


 + LWLockAcquire(WALInsertLock, LW_EXCLUSIVE);
 + XLogCtl->Insert.fullPageWrites = fullPageWrites;
 + LWLockRelease(WALInsertLock);
 
 I don't think WALInsertLock needs to be hold here because there is no
 concurrently running process which can access Insert.fullPageWrites.
 For example, Insert->currpos and Insert->LogwrtResult are also changed
 without the lock there.
 

Yes. 

 The source comment of XLogReportParameters() needs to be modified.

Yes, too.



Jun Ishizuka
NTT Software Corporation
TEL:045-317-7018
E-Mail: ishizuka@po.ntts.co.jp






[HACKERS] Commitfest app not exporting moved to another commitfest to rss

2011-10-12 Thread Brar Piening
I use RSS to follow up on patches that I'm interested in, and it's the
second time I have wondered where my patch went in the commitfest app
due to $Topic.


Is this a known limitation?
If yes: Is there a way to change this?
If yes: Can/shall I help?
If yes: Where should I start?

Regards,

Brar





Re: [HACKERS] Commitfest app not exporting moved to another commitfest to rss

2011-10-12 Thread Brar Piening

Brar Piening wrote:
I use RSS to follow up on patches that I'm interested in, and it's the
second time I have wondered where my patch went in the commitfest
app due to $Topic.


Just after pushing the send button my RSS-feed got updated and contained 
the relevant information.

Sorry for the noise!

I don't actually understand the delay in my feed getting updated despite 
the fact that I'm regularly updating it on my side.


Regards,

Brar
