Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-12 Thread Fujii Masao
On Sat, Sep 10, 2011 at 12:18 AM, Simon Riggs  wrote:
> On Fri, Sep 9, 2011 at 3:05 PM, Tom Lane  wrote:
>> Magnus Hagander  writes:
>>> I have to wonder though, if it wouldn't be less confusing to just get
>>> rid of recovery.conf and use a *different* file for this. Just to make
>>> it clear it's not a config file, but just a boolean exists/notexists
>>> state.
>>
>> +1.  If it's not a configuration file anymore, it shouldn't be called
>> one.
>
> +1 to rename file
>
> +1 to overall concept, just thinking same myself, not looked at patch yet

Do you still think that backward compatibility (i.e., the capability to
specify recovery parameters in recovery.conf) is required?

Even if we maintain backward compatibility, renaming the file means that
the tools which depend on recovery.conf would need to be changed to use the
new file instead of recovery.conf. So I'm not sure whether leaving the
capability to set parameters in recovery.conf is really worth supporting.
If we would like to make our backward-compatibility effort more worthwhile,
we might have to make the path of the file configurable so that a user can
set it back to "recovery.conf".

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] collation, arrays, and ranges

2011-09-12 Thread Jeff Davis
My interpretation of collation for range types is different from that
for arrays, so I'm presenting it here in case someone has an objection.

An array type has the same typcollation as its element type. This makes
sense, because comparisons between arrays are affected by the COLLATE
clause.

Comparison between ranges should not be affected by the COLLATE clause
(as we discussed). So, I chose to represent that as a separate
rngcollation and leave the typcollation 0. In other words, collation is
a concept internal to that range type and fixed at type definition time.
Range types are affected by their internal collation, but don't take
part in the logic that passes collation through the type system.

Comments?

Regards,
Jeff Davis




Re: [HACKERS] cheaper snapshots redux

2011-09-12 Thread Amit kapila
>> 4. Won't it affect things if we don't update xmin every time and just
>> note the committed XIDs? The reason I am asking is that xmin is used in
>> the tuple visibility check, so with the new idea, in some cases, instead
>> of returning early after checking xmin, it has to go through the
>> committed XID list.

>> I understand that there may be few such cases, or the improvement from
>> your idea can supersede this minimal effect. However, some cases can be
>> hurt.

> The snapshot xmin has to be up to date.  I'm not planning to break that
> because it would be wrong.

   The approach in your idea mentions that after taking a snapshot, only
committed XIDs will be updated, and sometimes the snapshot itself.

   So when will xmin be updated, according to your idea?

> RecentGlobalXmin doesn't need to be completely up to date, and in fact
> recomputing it on every snapshot becomes prohibitively expensive with this
> approach.  I'm still struggling with the best way to handle that.

   RecentGlobalXmin and RecentXmin are mostly updated with the snapshot's
xmin, and that happens outside ProcArrayLock, so why would it be expensive
if you have an updated xmin?


With Regards,

Amit Kapila.



Re: [HACKERS] What Would You Like To Do?

2011-09-12 Thread David E. Wheeler
On Sep 12, 2011, at 9:41 PM, Magnus Hagander wrote:

> > I'm not looking for funding (probably couldn't take it if I was offered
> > it, heh), so I'm not sure if it should be included, but I'm still
> > planning to dig into revamping the logging system (if I can ever manage
> > to get out from under my current 'real job' workload :/).  If others are
> > interested and have time to help, please let me know..
> 
> Definitely interested in that, yes. We probably have some overlap in our 
> thoughts and plans, as discussed at the developer meeting in Ottawa.
> 
> Not specifically looking for funding either, but it would certainly increase 
> the number of hours available to work on it and as such make it happen sooner…

Yeah, that's the point.

Best,

David




Re: [HACKERS] Sponsored development

2011-09-12 Thread Heikki Linnakangas

On 13.09.2011 00:33, Dermot wrote:

> I have seen this feature on the todo list:
> http://wiki.postgresql.org/wiki/Todo#Referential_Integrity
>
> It's my understanding that this will allow FK constraints on array
> elements; if I'm wrong, please stop me now.


You're right.

I don't want to discourage further development in the area, of course, 
but I think you can already accomplish that with exclusion constraints. 
And starting with 9.1, regular triggers can be used too, if you use 
serializable snapshot isolation.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-12 Thread Fujii Masao
On Fri, Sep 9, 2011 at 8:27 PM, Magnus Hagander  wrote:
>>>> If the same parameter is specified in both files, the setting in
>>>> recovery.conf overrides that in postgresql.conf. In this case, the SHOW
>>>> command displays the settings in postgresql.conf even though they are
>>>> not used at all. Even if
>>>
>>> Now, *that* is just broken, IMHO. The SHOW command should show what is
>>> actually in use, period. Anything else is guaranteed to confuse the hell
>>> out of anybody trying to use it.
>>
>> I'm afraid that we have to add very complicated code to fix that problem.
>> Though of course we can easily fix the problem if we don't care about
>> the backward compatibility.
>
> That is an even bigger reason to drop backwards compatibility. Unless
> someone else can come up with a neat way of fixing it.

Agreed. After a little thought, it came to mind that we might be able to
simply fix that problem by making the postmaster, instead of the startup
process, read recovery.conf: the postmaster reads recovery.conf before it
invokes the startup process and sets the GUC global variables to the
specified values. The child processes, including the startup process,
inherit those GUC variables, so SHOW can return the exact setting values
and the startup process can use them to perform recovery. Even in the
EXEC_BACKEND case, we can adopt this idea, because those global variables
are passed from parent to children via the save/restore_backend_variables
functions.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] What Would You Like To Do?

2011-09-12 Thread Magnus Hagander
On Sep 13, 2011 2:37 AM, "Stephen Frost"  wrote:
>
> * David E. Wheeler (da...@kineticode.com) wrote:
> > Toward the end of the presentation, I'd like to make some suggestions
and offer to do some match-making. I'm thinking primarily of listing some of
the stuff the community would love to see done, along with the names of the
folks and/or companies who, with funding, might make it happen. My question
for you is: What do you want to work on?
>
> I'm not looking for funding (probably couldn't take it if I was offered
> it, heh), so I'm not sure if it should be included, but I'm still
> planning to dig into revamping the logging system (if I can ever manage
> to get out from under my current 'real job' workload :/).  If others are
> interested and have time to help, please let me know..

Definitely interested in that, yes. We probably have some overlap in our
thoughts and plans, as discussed at the developer meeting in Ottawa.

Not specifically looking for funding either, but it would certainly increase
the number of hours available to work on it and as such make it happen
sooner...

/Magnus


Re: [HACKERS] xlog file naming

2011-09-12 Thread Fujii Masao
On Tue, Sep 13, 2011 at 12:18 PM, Jaime Casanova  wrote:
>> and extending
>> pg_xlogfile_name() so that it accepts not only LSN but also the timeline?
>> This idea enables us to get the backup start WAL file name by executing
>> "SELECT pg_xlogfile_name(pg_current_timeline(), pg_start_backup());".
>
> we can do "SELECT pg_xlogfile_name(pg_start_backup())" now, so how is
> that different?

Yeah, no difference. My idea is obviously useless for pg_start_backup().

What I had in mind was to use the extended pg_xlogfile_name() to calculate
the WAL file name from (for example) pg_controldata output taken on the
standby. The timeline which pg_controldata displays might be different from
the current one on the master, so pg_xlogfile_name(), which always uses the
current timeline, might return a wrong WAL file name. Allowing
pg_xlogfile_name() to use a specified timeline to calculate the file name
would address that problem.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] xlog file naming

2011-09-12 Thread Jaime Casanova
On Mon, Sep 12, 2011 at 8:44 PM, Fujii Masao  wrote:
> On Tue, Sep 13, 2011 at 3:36 AM, Peter Eisentraut  wrote:
>> The return value of pg_start_backup that currently looks something like
>>
>>  pg_start_backup
>> -
>>  8D1/C9013758
>>
>> should really be
>>
>>  08D1C9013758
>>
>> (perhaps the timeline should be included?)
>
> This change might break some tools which depend on such a result format.
> Instead of that, what about introducing something like pg_current_timeline()
> function which literally returns the current timeline ID,

up to here, I found it could have some use

> and extending
> pg_xlogfile_name() so that it accepts not only LSN but also the timeline?
> This idea enables us to get the backup start WAL file name by executing
> "SELECT pg_xlogfile_name(pg_current_timeline(), pg_start_backup());".
>

we can do "SELECT pg_xlogfile_name(pg_start_backup())" now, so how is
that different?

> Furthermore, in the same way, we can get the backup end WAL file name or
> current WAL file name from pg_stop_backup() or pg_current_xlog_location(),
> respectively.
>

we can do that already too

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training



Re: [HACKERS] show pool_status like...

2011-09-12 Thread Tatsuo Ishii
> Tatsuo Ishii  writes:
>> While I took a look at the MySQL manual (mainly for checking query cache
>> functionality), I noticed that MySQL has the following syntax:
> 
>> show  like 'foo%';
> 
>> I think this is useful for the "show pool_status" command since it has
>> lengthy output now.
> 
>> Usage of show pool_status with like will be something like:
> 
>> show pool_status like 'client_idle%' will only show the status of
>> client_idle_limit and client_idle_limit_in_recovery.
> 
>> Comments?
> 
> I think you need to rethink how you've got pool_status defined.
> The above kluge isn't going to play nicely at all with the pg_settings
> view, for example.

Sorry, I accidentally posted to pgsql-hackers, not pgpool-hackers. This
proposal was not for PostgreSQL, so please disregard it.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: [HACKERS] Re: [BUGS] BUG #6189: libpq: sslmode=require verifies server certificate if root.crt is present

2011-09-12 Thread Robert Haas
On Mon, Sep 12, 2011 at 2:20 PM, David Fetter  wrote:
> Well, "too much checking," classically, is a source of denial of
> service attacks.  It's not a super likely source, but it's a source,
> and it'd be better to fix it than leave it lie. :)

You forgot to attach the patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] show pool_status like...

2011-09-12 Thread Tom Lane
Tatsuo Ishii  writes:
> While I took a look at the MySQL manual (mainly for checking query cache
> functionality), I noticed that MySQL has the following syntax:

> show  like 'foo%';

> I think this is useful for the "show pool_status" command since it has
> lengthy output now.

> Usage of show pool_status with like will be something like:

> show pool_status like 'client_idle%' will only show the status of
> client_idle_limit and client_idle_limit_in_recovery.

> Comments?

I think you need to rethink how you've got pool_status defined.
The above kluge isn't going to play nicely at all with the pg_settings
view, for example.

regards, tom lane



Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Robert Haas
On Mon, Sep 12, 2011 at 8:39 PM, Bruce Momjian  wrote:
> Peter Eisentraut wrote:
>> On mån, 2011-09-12 at 09:43 -0700, Josh Berkus wrote:
>> > > Writing the release notes is really the main part of the work.  Bundling
>> > > the release takes 15 minutes, writing the announcement takes 15 minutes
>> > > (copy and paste), writing the release notes takes about 2 days.
>> >
>> > Yeah, but this shaved a lot of effort/delay off doing the final release
>> > notes.
>>
>> It did?  AFAICT, the final release notes were created from scratch and
>> the alpha release notes deleted.
>
> Yes, that is what happened.  I did the 9.1 release notes from scratch,
> and Robert Haas looked over the alpha notes and mine and found mine more
> complete.  He did move some wording from the alpha releases into the
> final release notes.  I think Robert has the best perspective on this
> issue.

I don't have much of an opinion on this, honestly.  I think that
whoever did the alpha release notes tried to hit the highlights,
whereas Bruce went for something more in-depth.  You could make an
argument for either approach.

I think if the alpha release notes were done with a clear idea in mind
of producing something like what Bruce turned out, it wouldn't be
necessary for Bruce to do it over again.  The problem is that once you
start leaving things out, it's very difficult to figure out exactly
what got left out without redoing the whole process ab initio.  On the
flip side, I cross-referenced the alpha release notes with Bruce's,
and found a few things that Bruce had mysteriously omitted or to which
he had given short shrift.  So there is potentially at least a little
bit of value in doing the process twice - it helps you catch things
that may have gotten dropped.

Having done some work on this, I do NOT believe the previously-offered
contention that this work can't be done incrementally.  I think it
could.  After each CF, Bruce, or someone else, could go through all
the commits and produce a list of items.  As the release wore on, it
might be necessary to subdivide some of the categories or recategorize
things, but I don't think it would be unmanageable.  The whole
process seems reasonably straightforward, just somewhat
time-consuming.  The main challenge seems to be making sure you don't
lose things.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] xlog file naming

2011-09-12 Thread Fujii Masao
On Tue, Sep 13, 2011 at 3:36 AM, Peter Eisentraut  wrote:
> The return value of pg_start_backup that currently looks something like
>
>  pg_start_backup
> -
>  8D1/C9013758
>
> should really be
>
>  08D1C9013758
>
> (perhaps the timeline should be included?)

This change might break some tools which depend on such a result format.
Instead of that, what about introducing something like a pg_current_timeline()
function, which literally returns the current timeline ID, and extending
pg_xlogfile_name() so that it accepts not only an LSN but also a timeline?
This idea enables us to get the backup start WAL file name by executing
"SELECT pg_xlogfile_name(pg_current_timeline(), pg_start_backup());".

Furthermore, in the same way, we can get the backup end WAL file name or
current WAL file name from pg_stop_backup() or pg_current_xlog_location(),
respectively.
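For illustration, the mapping from a timeline and an LSN to a WAL segment
file name that this relies on can be sketched like this. This is a
simplified stand-in for the backend's XLogFileName() macro, with
wal_file_name as an invented name, and it assumes the default 16 MB
segment size:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Default WAL segment size: 16 MB */
#define XLOG_SEG_SIZE (16 * 1024 * 1024)

/*
 * Build a WAL segment file name from a timeline ID and an LSN given as
 * its "logid/xrecoff" parts: three zero-padded 8-digit hex fields.
 */
static void
wal_file_name(char *buf, size_t len, uint32_t tli,
			  uint32_t logid, uint32_t xrecoff)
{
	snprintf(buf, len, "%08X%08X%08X", tli, logid, xrecoff / XLOG_SEG_SIZE);
}
```

For example, timeline 1 and LSN 8D1/C9013758 would yield the file name
00000001000008D1000000C9.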

> and similarly, log output and pg_controldata output like
>
> Latest checkpoint location:           8D3/5A1BB578
>
> should be
>
> Latest checkpoint location:           08D35A1BB578

LSN information is useful for at least debugging purposes. So, what about
leaving the LSN information as it is and appending the file name at the
end of the line, as follows? backup_label already represents the backup
start location in this way.

Latest checkpoint location: 0/220 (file 0002)

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[HACKERS] show pool_status like...

2011-09-12 Thread Tatsuo Ishii
Hi,

While I took a look at the MySQL manual (mainly for checking query cache
functionality), I noticed that MySQL has the following syntax:

show  like 'foo%';

I think this is useful for the "show pool_status" command since it has
lengthy output now.

Usage of show pool_status with like will be something like:

show pool_status like 'client_idle%' will only show the status of
client_idle_limit and client_idle_limit_in_recovery.

Comments?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Bruce Momjian
Peter Eisentraut wrote:
> On mån, 2011-09-12 at 09:43 -0700, Josh Berkus wrote:
> > > Writing the release notes is really the main part of the work.  Bundling
> > > the release takes 15 minutes, writing the announcement takes 15 minutes
> > > (copy and paste), writing the release notes takes about 2 days.
> > 
> > Yeah, but this shaved a lot of effort/delay off doing the final release
> > notes. 
> 
> It did?  AFAICT, the final release notes were created from scratch and
> the alpha release notes deleted.

Yes, that is what happened.  I did the 9.1 release notes from scratch,
and Robert Haas looked over the alpha notes and mine and found mine more
complete.  He did move some wording from the alpha releases into the
final release notes.  I think Robert has the best perspective on this
issue.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] What Would You Like To Do?

2011-09-12 Thread Stephen Frost
* David E. Wheeler (da...@kineticode.com) wrote:
> Toward the end of the presentation, I'd like to make some suggestions and 
> offer to do some match-making. I'm thinking primarily of listing some of the 
> stuff the community would love to see done, along with the names of the folks 
> and/or companies who, with funding, might make it happen. My question for you 
> is: What do you want to work on?

I'm not looking for funding (probably couldn't take it if I was offered
it, heh), so I'm not sure if it should be included, but I'm still
planning to dig into revamping the logging system (if I can ever manage
to get out from under my current 'real job' workload :/).  If others are
interested and have time to help, please let me know..

Thanks,

Stephen




Re: [HACKERS] EXPLAIN and nfiltered, take two

2011-09-12 Thread Marko Tiikkaja

On 12/09/2011 12:12, I wrote:
> On 2011-09-10 19:50, Marti Raudsepp wrote:
>> I tried this patch and noticed something weird. This is probably not
>> intentional:
>
> Indeed, it is not intentional.  Will see how I can fix this.

The attached patch is the best I could come up with.  I considered
showing "Rows Removed by Foo: (never executed)" and omitting the line
altogether, but I didn't particularly like either of those options.  The
current patch simply displays "Rows Removed by Foo: 0".

I also added a comment the last patch was missing.


--
Marko Tiikkaja    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 6408d16..1f42f46 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -64,9 +64,15 @@ static void show_qual(List *qual, const char *qlabel,
 static void show_scan_qual(List *qual, const char *qlabel,
 			   PlanState *planstate, List *ancestors,
 			   ExplainState *es);
+static void show_instrumented_scan_qual(List *qual, const char *qlabel,
+			   PlanState *planstate, double nremoved,
+			   List *ancestors, ExplainState *es);
 static void show_upper_qual(List *qual, const char *qlabel,
 			    PlanState *planstate, List *ancestors,
 			    ExplainState *es);
+static void show_instrumented_upper_qual(List *qual, const char *qlabel,
+			   PlanState *planstate, double nremoved,
+			   List *ancestors, ExplainState *es);
 static void show_sort_keys(SortState *sortstate, List *ancestors,
 			   ExplainState *es);
 static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
@@ -1002,29 +1008,37 @@ ExplainNode(PlanState *planstate, List *ancestors,
 						   "Index Cond", planstate, ancestors, es);
 			show_scan_qual(((IndexScan *) plan)->indexorderbyorig,
 						   "Order By", planstate, ancestors, es);
-			show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+			show_instrumented_scan_qual(plan->qual, "Filter", planstate,
+										((ScanState *) planstate)->ss_qualnremoved,
+										ancestors, es);
 			break;
 		case T_BitmapIndexScan:
 			show_scan_qual(((BitmapIndexScan *) plan)->indexqualorig,
 						   "Index Cond", planstate, ancestors, es);
 			break;
 		case T_BitmapHeapScan:
-			show_scan_qual(((BitmapHeapScan *) plan)->bitmapqualorig,
-						   "Recheck Cond", planstate, ancestors, es);
+			show_instrumented_scan_qual(((BitmapHeapScan *) plan)->bitmapqualorig,
+										"Recheck Cond", planstate,
+										((BitmapHeapScanState *) planstate)->bqonremoved,
+										ancestors, es);
 			/* FALL THRU */
 		case T_SeqScan:
 		case T_ValuesScan:
 		case T_CteScan:
 		case T_WorkTableScan:
 		case T_SubqueryScan:
-			show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+			show_instrumented_scan_qual(plan->qual, "Filter", planstate,
+										((ScanState *) planstate)->ss_qualnremoved,
+										ancestors, es);
 			break;
 		case T_FunctionScan:
 			if (es->verbose)
 				show_expression(((FunctionScan *) plan)->funcexpr,
 								"Function Call", planstate, ancestors,
 								es->verbose, es);
-			show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+			show_instrumented_scan_qual(plan->qual, "Filter", planstate,
+										((ScanState *) planstate)->ss_qualnremoved,
+										ancestors, es);
 			break;

Re: [HACKERS] xlog file naming

2011-09-12 Thread Tom Lane
Peter Eisentraut  writes:
> Isn't the naming of the xlog files slightly bogus?

No doubt, but by now there's enough replication-ish third-party code
that knows about them that I'm afraid changing these conventions would
be much more painful than it's worth.

regards, tom lane



Re: [HACKERS] [WIP] Caching constant stable expressions per execution

2011-09-12 Thread Tom Lane
Marti Raudsepp  writes:
> On Mon, Sep 12, 2011 at 00:22, Tom Lane  wrote:
>> type CacheExpr (that's just the first name that came to mind, maybe
>> somebody has a better idea)

> StableConstExpr? But we can leave the bikeshedding for later :)

Well, FWIW, I found that terminology entirely detestable.

regards, tom lane



Re: [HACKERS] Thinking about inventing MemoryContextSetParent

2011-09-12 Thread Tom Lane
Alvaro Herrera  writes:
> Excerpts from Tom Lane's message of sáb sep 10 19:03:23 -0300 2011:
>> I'm considering inventing a new mcxt.c primitive,
>> 
>> void MemoryContextSetParent(MemoryContext context, MemoryContext new_parent);
>> 
>> which would have the effect of delinking "context" from its current
>> parent context and attaching it as a child of the new specified parent.
>> (Any child contexts that it has would naturally follow along.)
>> Because of the way that mcxt.c handles parent/child links, there is no
>> palloc required and so the operation cannot fail.

> Interesting.  I wonder whether we could use this somehow to fix
> performance problems in certain subtransaction code paths that "reassign
> stuff to the parent"; instead of moving pointers or memory around,
> perhaps we could do something like this.  Not that I have actually
> looked into it.

Yeah, I think it would be worth looking for places where we are either
copying lots-o-stuff or else jumping through weird hoops to avoid doing
that.  I'm about halfway through rewriting the plancache and SPIPlan
stuff using this mechanism, and it seems to be coming out a whole lot
nicer --- it'll probably end up with less parse-tree-copying overall,
and much less risk of leaking memory when you hit an error partway
through constructing a cached plan.  (The existing SPI code gets a
completely failing grade on that aspect :-(.)
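Roughly, the reason MemoryContextSetParent cannot fail is that the contexts
form an intrusive tree and reparenting is pure pointer surgery: no palloc,
nothing to run out of. A toy sketch of the idea (invented names, not the
actual mcxt.c code):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of mcxt.c's parent/child links: each context keeps a pointer
 * to its parent, its first child, and its next sibling.
 */
typedef struct Ctx
{
	struct Ctx *parent;
	struct Ctx *firstchild;
	struct Ctx *nextchild;		/* next sibling under the same parent */
} Ctx;

static void
ctx_set_parent(Ctx *context, Ctx *new_parent)
{
	/* Delink from the old parent's child list, if any. */
	if (context->parent)
	{
		Ctx	  **link = &context->parent->firstchild;

		while (*link != context)
			link = &(*link)->nextchild;
		*link = context->nextchild;
	}

	/*
	 * Attach at the head of the new parent's child list; the context's own
	 * children follow along implicitly, and no allocation is needed.
	 */
	context->parent = new_parent;
	context->nextchild = new_parent->firstchild;
	new_parent->firstchild = context;
}
```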

regards, tom lane



[HACKERS] Sponsored development

2011-09-12 Thread Dermot
Hi,

First off, I hope this approach is not breaking protocol.

I have seen this feature on the todo list:
http://wiki.postgresql.org/wiki/Todo#Referential_Integrity

It's my understanding that this will allow FK constraints on array
elements; if I'm wrong, please stop me now.

If I've assumed correctly, the company I work for would like to offer
an incentive to progress the work. As we're a tax registered company,
we'd need a invoice or receipt. I do not know if there is an existing
mechanism for such as arrangement. We thought payment should be
released once the feature was available in a stable release. The
amount we'd like to offer is £500 (pounds sterling).

Again, I hope this email does not offend the etiquette of the list and
I'd like to keep the discussion on-list initially if possible.

Thank you,
Dermot.



Re: [HACKERS] [WIP] Caching constant stable expressions per execution

2011-09-12 Thread Marti Raudsepp
On Mon, Sep 12, 2011 at 00:22, Tom Lane  wrote:
> Well, people seem to think that this is worth pursuing, so here's a
> couple of thoughts about what needs to be done to get to something
> committable.

Thanks, that's exactly the kind of feedback I need.

> IMO this is no good because it means that every intermediate result
> computed within the cacheable expression will be leaked into
> per_query_memory.

Agreed, that's not ideal.

> type CacheExpr (that's just the first name that came to mind, maybe
> somebody has a better idea)

StableConstExpr? But we can leave the bikeshedding for later :)

> The planner would have to figure out where to inject
> CacheExpr nodes into expression trees --- ideally only the minimum
> number of nodes would be added.

Yeah, that occurred to me, but it seemed complicated at first, so I didn't
want to do it before having confirmation from the list. However, after
looking at the expression tree walking code for a bit, it doesn't seem
that scary anymore.

> The other thing that is going to be an issue is that I'm fairly sure
> this breaks plpgsql's handling of simple expressions.

Oh, I would have never thought of that.

Regards,
Marti



Re: [HACKERS] augmenting MultiXacts to improve foreign keys

2011-09-12 Thread Joshua D. Drake



Any thoughts on all of this?

On 09/09/2011 02:31 PM, Alvaro Herrera wrote:



Excerpts from Alvaro Herrera's message of mar ago 09 13:01:04 -0400 2011:


To implement this, we need to augment MultiXact to store the lock type
that each comprising Xid holds on the tuple.  Two bits per Xid are
needed.  My thinking is that we could simply extend the "members" to add
a byte for every four members.
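A minimal sketch of that packing, with invented names and arbitrary status
values chosen just for this example: one flags byte carries the 2-bit
status for four members.

```c
#include <assert.h>
#include <stdint.h>

/* Example 2-bit status codes; not the real values. */
#define STATUS_FOR_KEY_SHARE	0
#define STATUS_FOR_SHARE		1
#define STATUS_FOR_UPDATE		2
#define STATUS_UPDATE			3

/* Store the 2-bit status of member N into the flags array. */
static void
member_set_status(uint8_t *flags, int member, uint8_t status)
{
	int			shift = (member % 4) * 2;

	flags[member / 4] &= (uint8_t) ~(3 << shift);	/* clear old bits */
	flags[member / 4] |= (uint8_t) ((status & 3) << shift);
}

/* Fetch the 2-bit status of member N back out. */
static uint8_t
member_get_status(const uint8_t *flags, int member)
{
	return (flags[member / 4] >> ((member % 4) * 2)) & 3;
}
```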


So I've been working on this, and I've noticed that somehow it seems to
have turned into a giant snowball.  I'd like opinions on the items below
before I continue work here; if any of these ideas turns out to be a
showstopper, I'd like to know sooner rather than later.

1. since MultiXacts are going to contain updates and not just locks, it
means they will need to persist beyond OldestXmin -- in fact,
pg_multixact is going to have the same truncation rules as pg_clog,
namely the vacuum freeze horizon.  Currently they are truncated very
quickly; this is not going to be true anymore.

2. This also means that we may have to resolve MultiXacts to their
comprising TransactionIds when tqual.c is doing visibility checks on the
tuples.  Right now, the code simply does things like this:

if (tuple->t_infomask & HEAP_XMAX_IS_MULTI)
{
    /* MultiXacts are currently only allowed to lock tuples */
    Assert(tuple->t_infomask & HEAP_IS_LOCKED);
    return true;
}
/* further code checks HeapTupleHeaderGetXmax(tuple) */

It's now going to need to do something more like

if (tuple->t_infomask & HEAP_XMAX_IS_MULTI)
{
    if (tuple->t_infomask & HEAP_IS_LOCKED)
        return true;
    xmax = HeapTupleGetUpdateXid(tuple);
}
else
    xmax = HeapTupleHeaderGetXmax(tuple);
/* further code checks our derived xmax */

where the HeapTupleGetUpdateXid function looks more or less like this

TransactionId
HeapTupleGetUpdateXid(HeapTupleHeader tuple)
{
    TransactionId    update_xact = InvalidTransactionId;
    MultiXactMember *members;
    int              nmembers;

    Assert(!(tuple->t_infomask & HEAP_XMAX_IS_NOT_UPDATE));
    Assert(tuple->t_infomask & HEAP_XMAX_IS_MULTI);

    nmembers = GetMultiXactIdMembers(HeapTupleHeaderGetXmax(tuple), &members);

    if (nmembers > 0)
    {
        int i;

        for (i = 0; i < nmembers; i++)
        {
            /* KEY SHARE lockers are okay -- ignore them */
            if (members[i].status == MultiXactStatusKeyShare)
                continue;
            /* there should be at most one updater */
            Assert(update_xact == InvalidTransactionId);
            /* other status values are not acceptable because they
             * conflict with update */
            Assert(members[i].status == MultiXactStatusUpdate);
            update_xact = members[i].xid;
        }
    }

    return update_xact;
}


Which leads us to:

3. I've come up with HEAP_XMAX_IS_NOT_UPDATE in t_infomask, which means
that the Xmax, being a MultiXact, does not contain an update Xid.  This
reuses the old value of HEAP_XMAX_SHARED_LOCK.  I've used this rather
weird semantics for these reasons:

a) it's pg_upgrade compatible.  Any tuple that has the SHARED_LOCK bit
from the older version set cannot contain an update, and so the bit is
automatically right.
b) it quickly avoids having to run the GetMultiXactIdMembers thingy
(which is expensive) in the common case that there's no update.
c) it lets me keep the HEAP_IS_LOCKED semantics; which means "this tuple
is only locked by Xmax, not updated", which is used extensively in
various places.
/*
  * if any of these bits is set, xmax hasn't deleted the tuple, only locked it.
  */
#define HEAP_IS_LOCKED  (HEAP_XMAX_EXCL_LOCK | HEAP_XMAX_IS_NOT_UPDATE | \
 HEAP_XMAX_KEYSHR_LOCK)


4. FOR SHARE is not a very widely used clause, I think.  FOR UPDATE is
part of the standard and thus it warrants quick innards, i.e. its own
hint bit.  FOR KEY SHARE is going to be used by RI triggers and so it
should also be quick; I've also given it its own hint bit.  However,
FOR SHARE is probably not used much and I've relegated it to being
mandatorily stored in MultiXact.  That is, if someone requests a FOR
SHARE lock on a tuple, it will get a singleton MultiXact.  The reason
for this is that I didn't want to use one more hint bit.

5. I've now used three hint bits -- reused HEAP_XMAX_SHARED_LOCK as
HEAP_XMAX_IS_NOT_UPDATE (already explained); used the free hint bit from
t_infomask as HEAP_XMAX_KEYSHR_LOCK (should be obvious); and I've used
0x2000 in t_infomask2 as HEAP_UPDATE_KEY_INTACT, to mean that this tuple
has been updated but the key columns have not been modified.  This lets
heapam.c know that this tuple can be further key-share locked.

6. When locking a tuple that is being conc

Re: [HACKERS] REVIEW proposal: a validator for configuration files

2011-09-12 Thread Alexey Klyukin


On Sep 12, 2011, at 10:24 PM, Peter Eisentraut wrote:

> On ons, 2011-09-07 at 10:00 -0400, Tom Lane wrote:
>> There has however
>> been some debate about the exact extent of ignoring bad values during
>> reload --- currently the theory is "ignore the whole file if anything is
>> wrong", but there's some support for applying all non-bad values as long
>> as the overall file syntax is okay. 
> 
> That could be a problem if you have some values that depend on each
> other, and then you change both of them, but because of an error only
> one gets applied.  I think ACID(-like) changes is a feature, also on
> this level.
> 

I think exactly this argument has already been discussed earlier in this thread:
http://archives.postgresql.org/message-id/21310d95-eb8d-4b15-a8bc-0f05505c6...@phlo.org

--
Alexey Klyukin    http://www.commandprompt.com
The PostgreSQL Company – Command Prompt, Inc.



Re: [HACKERS] cheaper snapshots redux

2011-09-12 Thread Amit Kapila
>> 4. Won't it affect things if we don't update xmin every time and just note
>> the committed XIDs? The reason I am asking is that xmin is used in the
>> tuple visibility check, so with the new idea, in some cases, instead of
>> just returning early by checking xmin, it has to go through the committed
>> XID list.

>> I understand that there may be fewer such cases, or that the improvement
>> from your idea can supersede this small cost. However, some cases can
>> still be hurt.

> The snapshot xmin has to be up to date.  I'm not planning to break that
> because it would be wrong.

   In the approach described in your idea, after taking a snapshot, only
the committed XIDs are updated, and only sometimes the snapshot itself.

   So when will xmin be updated? According to your idea, the snapshot will
not be updated every time, so its xmin cannot always be the latest.

> RecentGlobalXmin doesn't need to be completely up to date, and in fact
> recomputing it on every snapshot becomes prohibitively expensive with this
> approach.  I'm still struggling with the best way to handle that.

   RecentGlobalXmin and RecentXmin are mostly updated from the snapshot's
xmin, and that too outside ProcArrayLock, so why will it be expensive if
xmin is up to date?



With Regards,

Amit Kapila.



***
This e-mail and attachments contain confidential information from HUAWEI,
which is intended only for the person or entity whose address is listed
above. Any use of the information contained herein in any way (including,
but not limited to, total or partial disclosure, reproduction, or
dissemination) by persons other than the intended recipient(s) is
prohibited. If you receive this e-mail in error, please notify the sender by
phone or email immediately and delete it!

-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com] 
Sent: Thursday, September 08, 2011 7:50 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux

On Tue, Sep 6, 2011 at 11:06 PM, Amit Kapila  wrote:
> 1. With the above, you want to reduce/remove the concurrency issue between
> the GetSnapshotData() [used at begining of sql command execution] and
> ProcArrayEndTransaction() [used at end transaction]. The concurrency issue
> is mainly ProcArrayLock which is taken by GetSnapshotData() in Shared mode
> and by ProcArrayEndTransaction() in X mode.
> There may be other instances for similar thing, but this the main thing
> which you want to resolve.

Yep.

> 2. You want to resolve it by using ring buffer such that readers don't
> need to take any lock.

Yep.  Actually, they're still going to need some spinlocks at least in
the first go round, to protect the pointers.  I'm hoping those can
eventually be eliminated on machines with 8-byte atomic reads using
appropriate memory barrier primitives.

> 1. 2 Writers; Won't 2 different sessions who try to commit at same time
> will get the same write pointer?
>    I assume it will be protected, as indicated in one of your replies,
> as I understood?

Yes, commits have to be serialized.  No way around that.  The best
we'll ever be able to do is shorten the critical section.

> 2. 1 Reader, 1 Writer; It might be the case that somebody has written a new
> snapshot and advanced the stop pointer, and at that point in time one
> reader came and read between the start pointer and stop pointer. Now the
> reader will see the following:
>   snapshot, few XIDs, snapshot
>
>    So will it handle this situation such that it will only read the latest
> snapshot?

In my prototype implementation that can't happen, because the start and
stop pointers are protected by a single spinlock and are moved
simultaneously.  But I think on machines with 8-byte atomic writes we
can get rid of that and just move the stop pointer first and then the
start pointer.  If you get more than one snapshot in the middle you
just ignore the first part of the data you read and start with the
beginning of the last snapshot.

> 3. How will you detect overwrite.

If the write pointer is greater than the start pointer by more than
the ring size, you've wrapped.
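The overwrite test described here fits in a few lines. This is a hypothetical model, not the prototype code: positions are assumed to be monotonically increasing 64-bit counters (the slot actually used being pos % RING_SIZE), and RING_SIZE is an arbitrary placeholder:

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 4096

/* A reader that began consuming at start_pos has been overwritten if the
 * writer has since advanced more than RING_SIZE entries past it, i.e. the
 * slots the reader still needs have been reused. */
static int
reader_was_overwritten(uint64_t start_pos, uint64_t write_pos)
{
	return write_pos > start_pos + RING_SIZE;
}
```

Because the positions never wrap themselves (64 bits is effectively inexhaustible), a single unsigned comparison suffices.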

> 4. Won't it effect if we don't update xmin everytime and just noting the
> committed XIDs. The reason I am asking is that it is used in tuple
> visibility check
>    so with new idea in some cases instead of just returning from begining
> by checking xmin it has to go through the committed XID list.
>    I understand that there may be less cases or the improvement by your
> idea can supesede this minimal effect. However some cases can be defeated.

The snapshot xmin has to be up to date.  I'm not planning to break
that because it would be wrong.

RecentGlobalXmin doesn't need to be completely up to date, and in fact
recomputing it on every snapshot becomes prohibitively expensive with
this approach.  I'm still struggling with the best way to handle that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: [HACKERS] cheaper snapshots redux

2011-09-12 Thread Amit Kapila
> If you know what transactions were running the last time a snapshot summary
> was written and what transactions have ended since then, you can work out
> the new xmin on the fly.  I have working code for this and it's actually
> quite simple.

 

I believe one method to do the same is as follows.

Let us assume that at some point the snapshot and completed-XID list are as
follows:

Snapshot
{
    Xmin  - 5
    Xip[] - 8 10 12
    Xmax  - 15
}

Committed XIDs - 8, 10, 12, 18, 20, 21

This means 16, 17, 19 are running transactions, so the new snapshot will be:

    Xmin  - 16
    Xmax  - 21
    Xip[] - 17, 19

 

But if we compute Xmin the above way, we need to check both the existing Xip
array and the committed-XID array to find Xmin. Won't this take noticeable
time, even though it is outside the lock, if Xip and the committed-XID list
are large?
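For concreteness, the on-the-fly xmin recomputation being discussed might look like the following. This is a hypothetical model, not Robert's actual prototype: xip is assumed sorted ascending, next_xid marks the first not-yet-assigned XID, and the linear membership scan is deliberately naive:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

static int
xid_in(TransactionId xid, const TransactionId *arr, int n)
{
	for (int i = 0; i < n; i++)
		if (arr[i] == xid)
			return 1;
	return 0;
}

/* The new xmin is the smallest XID that was either in the old snapshot's
 * in-progress array, or started at/after the old xmax, and has not shown
 * up in the committed-XID list written since the snapshot summary. */
static TransactionId
compute_new_xmin(const TransactionId *xip, int nxip,
				 TransactionId xmax, TransactionId next_xid,
				 const TransactionId *committed, int ncommitted)
{
	for (int i = 0; i < nxip; i++)		/* xip assumed sorted ascending */
		if (!xid_in(xip[i], committed, ncommitted))
			return xip[i];
	for (TransactionId xid = xmax; xid < next_xid; xid++)
		if (!xid_in(xid, committed, ncommitted))
			return xid;
	return next_xid;					/* everything visible has completed */
}
```

The cost concern raised above shows up directly: both loops are linear in the size of the old Xip array and of the committed-XID list, though they run without holding any lock.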

 

> Because GetSnapshotData() computes a new value for RecentGlobalXmin by
> scanning the ProcArray.  This isn't costing a whole lot extra right now
> because the xmin and xid fields are normally in the same cache line, so
> once you've looked at one of them it doesn't cost that much extra to
> look at the other.  If, on the other hand, you're not looking at (or even
> locking) the ProcArray, then doing so just to recompute RecentGlobalXmin
> sucks.

 

Yes, this takes more time compared to earlier, but if our approach to
calculating Xmin is like the above point, then one extra read outside the
lock should not matter. If the approach for that point is different,
however, it will be costlier.

 



-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com] 
Sent: Monday, September 12, 2011 7:39 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux

 

On Sun, Sep 11, 2011 at 11:08 PM, Amit Kapila  wrote:
>   In the approach mentioned in your idea, it mentioned that once after
> taking snapshot, only committed XIDs will be updated and sometimes snapshot
> itself.
>
>   So when the xmin will be updated according to your idea as snapshot will
> not be updated everytime so xmin cannot be latest always.

If you know what transactions were running the last time a snapshot
summary was written and what transactions have ended since then, you
can work out the new xmin on the fly.  I have working code for this
and it's actually quite simple.

>> RecentGlobalXmin doesn't need to be completely up to date, and in fact
>> recomputing it on every snapshot becomes prohibitively expensive with this
>> approach.  I'm still struggling with the best way to handle that.
>
>   RecentGlobalXmin and RecentXmin are mostly updated with snapshots xmin
> and that too outside ProcArrayLock, so why it will be expensive if you have
> updated xmin.

Because GetSnapshotData() computes a new value for RecentGlobalXmin by
scanning the ProcArray.  This isn't costing a whole lot extra right
now because the xmin and xid fields are normally in the same cache
line, so once you've looked at one of them it doesn't cost that much
extra to look at the other.  If, on the other hand, you're not looking
at (or even locking) the ProcArray, then doing so just to recompute
RecentGlobalXmin sucks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] xlog file naming

2011-09-12 Thread Heikki Linnakangas

On 12.09.2011 21:36, Peter Eisentraut wrote:

PS2: While we're discussing the cleanup of xlog.c, someone daring could
replace XLogRecPtr by a plain uint64 and possibly save hundreds of lines
of code and eliminate a lot of the above confusion.


One problem with that is that it would break binary 
backwards-compatibility on little-endian systems.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] REVIEW proposal: a validator for configuration files

2011-09-12 Thread Peter Eisentraut
On ons, 2011-09-07 at 10:00 -0400, Tom Lane wrote:
> There has however
> been some debate about the exact extent of ignoring bad values during
> reload --- currently the theory is "ignore the whole file if anything is
> wrong", but there's some support for applying all non-bad values as long
> as the overall file syntax is okay. 

That could be a problem if you have some values that depend on each
other, and then you change both of them, but because of an error only
one gets applied.  I think ACID(-like) changes is a feature, also on
this level.




Re: [HACKERS] [COMMITTERS] pgsql: Remove "fmgr.h" include in cube contrib --- caused crash on a Ge

2011-09-12 Thread Alvaro Herrera
Excerpts from Alvaro Herrera's message of vie sep 02 13:53:12 -0300 2011:

> I wonder if there should be a new header, something like
> walsender_internal.h, for stuff like WalSnd and WalSndCtlData struct
> defs.

... as in the attached patch.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


0001-Split-walsender.h-taking-out-private-declarations.patch
Description: Binary data



[HACKERS] xlog file naming

2011-09-12 Thread Peter Eisentraut
Isn't the naming of the xlog files slightly bogus?

We have the following sequence:

00000001000008D0000000FD
00000001000008D0000000FE
00000001000008D100000000

Ignoring that we skip FF for some obscure reason (*), these are
effectively supposed to be 64-bit numbers, chunked into 16MB pieces, so
shouldn't the naming be

00000001000008D0FD000000
00000001000008D0FE000000
00000001000008D100000000
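The mapping being discussed can be sketched as code. This is an illustration of the naming arithmetic only (assuming the default 16MB segment size), not the backend's actual implementation; it also does not model the FF skip, which comes from the segments-per-file divisor truncating to 0xFF:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define XLOG_SEG_SIZE  0x1000000u		/* 16MB segments */

/* Today's scheme: split the 64-bit position into a 32-bit "xlogid" and a
 * segment number within it, and print them as three 8-hex-digit fields. */
static void
current_seg_name(char *buf, uint32_t tli, uint64_t pos)
{
	uint32_t	xlogid = (uint32_t) (pos >> 32);
	uint32_t	seg = (uint32_t) (pos & 0xFFFFFFFFu) / XLOG_SEG_SIZE;

	sprintf(buf, "%08X%08X%08X", tli, xlogid, seg);
}

/* The proposal above: just zero the low 24 bits of the position (the
 * offset within the segment) and print it as one 16-hex-digit number. */
static void
proposed_seg_name(char *buf, uint32_t tli, uint64_t pos)
{
	sprintf(buf, "%08X%016llX", tli,
			(unsigned long long) (pos & ~(uint64_t) (XLOG_SEG_SIZE - 1)));
}
```

Both functions produce 24-character names for the same position; the difference is purely whether the last 8 digits count segments or bytes.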

Then a lot of other things would also start making more sense:

The backup file

00000001000008D1000000C9.00013758.backup

should really be

00000001000008D1C9013758.backup

(At least conceptually.  It's debatable whether we'd want to change
that, since as it is, it's convenient to detect the preceding WAL file
name by cutting off the end.  OTOH, it's safer to actually look into the
backup file for that information.)

The return value of pg_start_backup that currently looks something like

 pg_start_backup 
-----------------
 8D1/C9013758

should really be

 000008D1C9013758

(perhaps the timeline should be included?)

and similarly, log output and pg_controldata output like

Latest checkpoint location:   8D3/5A1BB578

should be

Latest checkpoint location:   000008D35A1BB578

Then all instances of xlog location would look the same.

Also, the documentation offers this example:

"For example, if the starting WAL file is 0000000100001234000055CD the
backup history file will be named something like
0000000100001234000055CD.007C9330.backup."

Both the WAL and the backup file names used here are actually
impossible, and are not helping to clarify the issue.

It seems to me that this is all uselessly complicated and confused.


(*) - That idea appears to come from the same aboriginal confusion about
WAL "files" vs "segments" vs WAL position.  Unless we support WAL
segments of size 1 or 2 bytes, there shouldn't be any risk of
overflowing the segment counter.


PS2: While we're discussing the cleanup of xlog.c, someone daring could
replace XLogRecPtr by a plain uint64 and possibly save hundreds of lines
of code and eliminate a lot of the above confusion.





Re: [HACKERS] Re: [BUGS] BUG #6189: libpq: sslmode=require verifies server certificate if root.crt is present

2011-09-12 Thread David Fetter
On Mon, Sep 12, 2011 at 07:37:23PM +0200, Magnus Hagander wrote:
> On Mon, Sep 12, 2011 at 19:21, David Fetter  wrote:
> > On Wed, Aug 31, 2011 at 09:59:18AM +, Srinivas Aji wrote:
> >>
> >> The following bug has been logged online:
> >>
> >> Bug reference:      6189
> >> Logged by:          Srinivas Aji
> >> Email address:      srinivas@emc.com
> >> PostgreSQL version: 9.0.4
> >> Operating system:   Linux
> >> Description:        libpq: sslmode=require verifies server certificate if
> >> root.crt is present
> >> Details:
> >>
> >> >From the documentation of sslmode values in
> >> http://www.postgresql.org/docs/9.0/static/libpq-ssl.html ,
> >> it looks like libpq will not verify the server certificate when the option
> >> sslmode=require is used, and will perform different levels of certificate
> >> verification in the cases sslmode=verify-ca and sslmode=verify-full.
> >>
> >> The observed behaviour is a bit different. If the ~/.postgresql/root.crt
> >> file (or any other filename set through sslrootcert option) is found,
> >> sslmode=require also performs the same level of certificate verification as
> >> verify-ca. The difference between require and verify-ca is that it is an
> >> error for the file to not exist when sslmode is verify-ca.
> >>
> >> Thanks,
> >> Srinivas
> >
> > It looks to me like there could at least in theory be an attack vector
> > or two that we're not covering with this bug.  Anybody want to tackle
> > same?
> 
> I haven't checked the code yet, but from the report it sounds like
> we're checking *too much* - how could that be an attack vector?

Well, "too much checking," classically, is a source of denial of
service attacks.  It's not a super likely source, but it's a source,
and it'd be better to fix it than leave it lie. :)

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] psql additions

2011-09-12 Thread Peter Eisentraut
On mån, 2011-09-12 at 09:01 -0400, Andrew Dunstan wrote:
> 
> On 09/12/2011 08:39 AM, Peter Eisentraut wrote:
> > On mån, 2011-09-12 at 05:26 -0400, Andrew Dunstan wrote:
> >> On 09/11/2011 11:43 PM, Robert Haas wrote:
> >>>
>  Second, I'd like to be able to set a minimum number of lines below which 
>  the
>  pager would not be used, something like:
> 
>  \pset pagerminlines 200
> 
>  Thoughts?
> >>> Gee, why do I feel like we have something like this already?
> >> We do? We control columns, but not lines AFAIK.
> > We check the lines and columns in the terminal window.  Oddly enough,
> > that code is in two separate places, so I don't know if they have
> > consistent effects.
> >
> >
> 
> Yes, but what I want is to be able to control it. So even if my terminal 
> is 50 lines long I want to be able to say "Don't use the pager unless 
> there are 200 or more lines."

Right.  Actually, if you implemented the \setenv idea, you could set the
environment variable LINES to get that effect.  Except that you will
then also have to implement the part that looks at the LINES variable,
but we have code elsewhere that looks at COLUMNS, so that would only be
consistent.  Again, see comment above that the lines and columns
checking might be a bit inconsistent at the moment.




Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Peter Eisentraut
On mån, 2011-09-12 at 09:43 -0700, Josh Berkus wrote:
> > Writing the release notes is really the main part of the work.  Bundling
> > the release takes 15 minutes, writing the announcement takes 15 minutes
> > (copy and paste), writing the release notes takes about 2 days.
> 
> Yeah, but this shaved a lot of effort/delay off doing the final release
> notes. 

It did?  AFAICT, the final release notes were created from scratch and
the alpha release notes deleted.




Re: [HACKERS] Re: [BUGS] BUG #6189: libpq: sslmode=require verifies server certificate if root.crt is present

2011-09-12 Thread Magnus Hagander
On Mon, Sep 12, 2011 at 19:21, David Fetter  wrote:
> On Wed, Aug 31, 2011 at 09:59:18AM +, Srinivas Aji wrote:
>>
>> The following bug has been logged online:
>>
>> Bug reference:      6189
>> Logged by:          Srinivas Aji
>> Email address:      srinivas@emc.com
>> PostgreSQL version: 9.0.4
>> Operating system:   Linux
>> Description:        libpq: sslmode=require verifies server certificate if
>> root.crt is present
>> Details:
>>
>> >From the documentation of sslmode values in
>> http://www.postgresql.org/docs/9.0/static/libpq-ssl.html ,
>> it looks like libpq will not verify the server certificate when the option
>> sslmode=require is used, and will perform different levels of certificate
>> verification in the cases sslmode=verify-ca and sslmode=verify-full.
>>
>> The observed behaviour is a bit different. If the ~/.postgresql/root.crt
>> file (or any other filename set through sslrootcert option) is found,
>> sslmode=require also performs the same level of certificate verification as
>> verify-ca. The difference between require and verify-ca is that it is an
>> error for the file to not exist when sslmode is verify-ca.
>>
>> Thanks,
>> Srinivas
>
> It looks to me like there could at least in theory be an attack vector
> or two that we're not covering with this bug.  Anybody want to tackle
> same?

I haven't checked the code yet, but from the report it sounds like
we're checking *too much* - how could that be an attack vector?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] What Would You Like To Do?

2011-09-12 Thread David E . Wheeler
On Sep 12, 2011, at 6:01, Peter Eisentraut  wrote:

> Column-level collation support already exists.

Yeah, just realized that. I meant to say table- or column-level encoding.

Best,

David




[HACKERS] Re: [BUGS] BUG #6189: libpq: sslmode=require verifies server certificate if root.crt is present

2011-09-12 Thread David Fetter
On Wed, Aug 31, 2011 at 09:59:18AM +, Srinivas Aji wrote:
> 
> The following bug has been logged online:
> 
> Bug reference:  6189
> Logged by:  Srinivas Aji
> Email address:  srinivas@emc.com
> PostgreSQL version: 9.0.4
> Operating system:   Linux
> Description:libpq: sslmode=require verifies server certificate if
> root.crt is present
> Details: 
> 
> >From the documentation of sslmode values in
> http://www.postgresql.org/docs/9.0/static/libpq-ssl.html ,
> it looks like libpq will not verify the server certificate when the option
> sslmode=require is used, and will perform different levels of certificate
> verification in the cases sslmode=verify-ca and sslmode=verify-full.
> 
> The observed behaviour is a bit different. If the ~/.postgresql/root.crt
> file (or any other filename set through sslrootcert option) is found,
> sslmode=require also performs the same level of certificate verification as
> verify-ca. The difference between require and verify-ca is that it is an
> error for the file to not exist when sslmode is verify-ca.
> 
> Thanks,
> Srinivas

It looks to me like there could at least in theory be an attack vector
or two that we're not covering with this bug.  Anybody want to tackle
same?

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] What Would You Like To Do?

2011-09-12 Thread Dimitri Fontaine
Hi,

"David E. Wheeler"  writes:
> Thanks to Greg Smith for adding a few bonus ideas I hadn't thought of. What
> else have you got? I don't think we necessarily have to limit ourselves to
> core features, BTW: projects like PostGIS and pgAdmin are also clearly
> popular, and new projects of that scope (or improvements to those!) would no
> doubt be welcome.

You could add DDL Triggers from me (2ndQuadrant) and process-based
parallel loading in pgloader (currently thread based, sucks).

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] libpq: Return of NULL from PQexec

2011-09-12 Thread Merlin Moncure
On Mon, Sep 12, 2011 at 10:40 AM, Mark Hills  wrote:
> The libpq documentation for PQexec states:
>
>  "If a null pointer is returned, it should be treated like a
>   PGRES_FATAL_ERROR result"
>
> But this contradicts the example programs; eg. from Example Program 1:
>
>   /* Start a transaction block */
>   res = PQexec(conn, "BEGIN");
>   if (PQresultStatus(res) != PGRES_COMMAND_OK)
>   {
>       fprintf(stderr, "BEGIN command failed: %s", PQerrorMessage(conn));
>       PQclear(res);
>       exit_nicely(conn);
>   }
>
> which does not check if (res != NULL).
>
> The example is not buggy -- PQresultStatus, PQerrorMessage and,
> importantly, PQclear deal with the NULL value; eg. src/libpq/fq-exec.c:
>
>  ExecStatusType
>  PQresultStatus(const PGresult *res)
>  {
>      if (!res)
>          return PGRES_FATAL_ERROR;
>      return res->resultStatus;
>  }
>
> So I took the example to be correct, and attempted to fix the
> documentation in the patch below. I hope this is useful.
>
> In a straw-poll of code I could find using libpq, I found most follows the
> example and is reliant on libpq for its NULL check.
>
> The same also applies to PQconnect functions -- and PQstatus, PQfinish,
> which I also tried to clarify in the documentation.
>
> Thanks
>
> --
> Mark
>
>
> diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
> index 163a893..57be7e1 100644
> --- a/doc/src/sgml/libpq.sgml
> +++ b/doc/src/sgml/libpq.sgml
> @@ -62,7 +62,7 @@
>    return a non-null object pointer, unless perhaps there is too
>    little memory even to allocate the PGconn object.
>    The PQstatus function should be called to check
> -   whether a connection was successfully made before queries are sent
> +   the return value for a successful connection before queries are sent
>    via the connection object.
>
>    
> @@ -1748,8 +1748,10 @@ PGresult *PQexec(PGconn *conn, const char *command);
>         Returns a PGresult pointer or possibly a null
>         pointer.  A non-null pointer will generally be returned except in
>         out-of-memory conditions or serious errors such as inability to send
> -        the command to the server.  If a null pointer is returned, it should
> -        be treated like a PGRES_FATAL_ERROR result.  Use
> +        the command to the server.  The PQresultStatus function
> +        should be called to check the return value for any errors (including
> +        the value of a null pointer, in which case it will return
> +        PGRES_FATAL_ERROR).  Use
>         PQerrorMessage to get more information about such
>         errors.
>        

yeah -- libpq's handling of errors and result state is (put
charitably) pretty clunky -- a modernized api would probably lean on
thread local storage for global error state and return sane errors in
OOM conditions.  libpq simply demands wrapping if you want a clean
api.  anyways, +1 on the patch and the rationale -- the idea is to not
manually check NULL but to try and rely on the API -- this removes
(stupid) logic from userland.

merlin
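The "let the API absorb NULL" convention discussed in this thread can be modeled in isolation. This is a toy stand-in, not libpq itself; the type and function names are invented, but the shape mirrors how PQresultStatus and PQclear treat a null result:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef enum { RES_COMMAND_OK, RES_FATAL_ERROR } ResStatus;

typedef struct { ResStatus status; } Result;

/* Every accessor tolerates NULL, so callers can test status without a
 * separate NULL check -- NULL (e.g. from OOM) reads as a fatal error. */
static ResStatus
result_status(const Result *res)
{
	if (res == NULL)
		return RES_FATAL_ERROR;
	return res->status;
}

/* The destructor tolerates NULL too (free(NULL) is a no-op), so the
 * cleanup path in the example program is safe without an extra check. */
static void
result_clear(Result *res)
{
	free(res);
}
```

The design choice this illustrates is the one the documentation patch codifies: push the NULL handling into the API once, rather than into every caller.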



Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Josh Berkus

> Writing the release notes is really the main part of the work.  Bundling
> the release takes 15 minutes, writing the announcement takes 15 minutes
> (copy and paste), writing the release notes takes about 2 days.

Yeah, but this shaved a lot of effort/delay off doing the final release
notes.

Also, you could get more community help on the release notes if you
wikified them the way you did the first time.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] Thinking about inventing MemoryContextSetParent

2011-09-12 Thread Alvaro Herrera

Excerpts from Tom Lane's message of sáb sep 10 19:03:23 -0300 2011:
> I'm considering inventing a new mcxt.c primitive,
> 
> void MemoryContextSetParent(MemoryContext context, MemoryContext new_parent);
> 
> which would have the effect of delinking "context" from its current
> parent context and attaching it as a child of the new specified parent.
> (Any child contexts that it has would naturally follow along.)
> Because of the way that mcxt.c handles parent/child links, there is no
> palloc required and so the operation cannot fail.

Interesting.  I wonder whether we could use this somehow to fix
performance problems in certain subtransaction code paths that "reassign
stuff to the parent"; instead of moving pointers or memory around,
perhaps we could do something like this.  Not that I have actually
looked into it.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
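A toy model may make the "no palloc required, cannot fail" point concrete. The three-pointer layout below mirrors how mcxt.c links contexts (parent, firstchild, nextchild), but everything else is simplified and the names are invented:

```c
#include <assert.h>
#include <stddef.h>

/* Toy memory-context node: children of a parent form a singly linked
 * sibling list headed by firstchild. */
typedef struct Ctx
{
	struct Ctx *parent;
	struct Ctx *firstchild;
	struct Ctx *nextchild;		/* next sibling under the same parent */
} Ctx;

/* Reparent "context" under "new_parent".  Only existing pointers are
 * rewritten, so no allocation happens and the operation cannot fail;
 * the context's own children follow along implicitly. */
static void
ctx_set_parent(Ctx *context, Ctx *new_parent)
{
	/* delink from the current parent's child list */
	if (context->parent)
	{
		Ctx		   *parent = context->parent;

		if (parent->firstchild == context)
			parent->firstchild = context->nextchild;
		else
		{
			Ctx		   *prev;

			for (prev = parent->firstchild;
				 prev->nextchild != context;
				 prev = prev->nextchild)
				;
			prev->nextchild = context->nextchild;
		}
	}

	/* attach as the first child of the new parent */
	context->parent = new_parent;
	context->nextchild = new_parent->firstchild;
	new_parent->firstchild = context;
}
```

The only cost is the walk over the old parent's sibling list, which is why the real primitive can promise it never errors out.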



[HACKERS] libpq: Return of NULL from PQexec

2011-09-12 Thread Mark Hills
The libpq documentation for PQexec states:

  "If a null pointer is returned, it should be treated like a 
   PGRES_FATAL_ERROR result"

But this contradicts the example programs; eg. from Example Program 1:

   /* Start a transaction block */
   res = PQexec(conn, "BEGIN");
   if (PQresultStatus(res) != PGRES_COMMAND_OK)
   {
   fprintf(stderr, "BEGIN command failed: %s", PQerrorMessage(conn));
   PQclear(res);
   exit_nicely(conn);
   }

which does not check if (res != NULL).

The example is not buggy -- PQresultStatus, PQerrorMessage and, 
importantly, PQclear deal with the NULL value; e.g. src/interfaces/libpq/fe-exec.c:

  ExecStatusType
  PQresultStatus(const PGresult *res)
  {
  if (!res)
  return PGRES_FATAL_ERROR;
  return res->resultStatus;
  }

So I took the example to be correct, and attempted to fix the 
documentation in the patch below. I hope this is useful.

In a straw-poll of code I could find using libpq, I found most follows the 
example and is reliant on libpq for its NULL check.

The same also applies to PQconnect functions -- and PQstatus, PQfinish, 
which I also tried to clarify in the documentation.

Thanks

-- 
Mark


diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 163a893..57be7e1 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -62,7 +62,7 @@
return a non-null object pointer, unless perhaps there is too
little memory even to allocate the PGconn object.
The PQstatus function should be called to check
-   whether a connection was successfully made before queries are sent
+   the return value for a successful connection before queries are sent
via the connection object.
 

@@ -1748,8 +1748,10 @@ PGresult *PQexec(PGconn *conn, const char *command);
 Returns a PGresult pointer or possibly a null
 pointer.  A non-null pointer will generally be returned except in
 out-of-memory conditions or serious errors such as inability to send
-the command to the server.  If a null pointer is returned, it should
-be treated like a PGRES_FATAL_ERROR result.  Use
+the command to the server.  The PQresultStatus function
+should be called to check the return value for any errors (including
+the value of a null pointer, in which case it will return
+PGRES_FATAL_ERROR).  Use
 PQerrorMessage to get more information about such
 errors.




Re: [HACKERS] cheaper snapshots redux

2011-09-12 Thread Robert Haas
On Mon, Sep 12, 2011 at 11:07 AM, Amit Kapila  wrote:
>> If you know what transactions were running the last time a snapshot summary
>> was written and what transactions have ended since then, you can work out
>> the new xmin on the fly.  I have working code for this and it's actually
>> quite simple.
>
> I believe one method to do same is as follows:
>
> Let us assume at some point of time the snapshot and completed XID list is
> somewhat as follows:
>
> Snapshot
>
> { Xmin – 5, Xip[] – 8 10 12, Xmax  - 15 }
>
> Committed XIDS – 8, 10 , 12, 18, 20, 21
>
> So it means 16,17,19 are running transactions. So it will behave as follows:
>
> { Xmin – 16, Xmax – 21, Xip[] – 17,19 }

Yep, that's pretty much what it does, although xmax is actually
defined as the XID *following* the last one that ended, and I think
xmin needs to also be in xip, so in this case you'd actually end up
with xmin = 15, xmax = 22, xip = { 15, 16, 17, 19 }.  But you've got
the basic idea of it.
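The derivation described above can be sketched with toy integer XIDs, assuming XIDs are assigned sequentially up to the highest one seen to end (a simplification for illustration; not the actual server code):

```c
#include <stdbool.h>

#define MAX_XIDS 64

/* A toy snapshot: xmin, xmax (one past the last ended XID), and the
 * in-progress list xip[].  A sketch of the bookkeeping only. */
typedef struct
{
    int xmin;
    int xmax;
    int xip[MAX_XIDS];
    int nxip;
} ToySnapshot;

static bool
xid_in(const int *xids, int n, int xid)
{
    for (int i = 0; i < n; i++)
        if (xids[i] == xid)
            return true;
    return false;
}

/* Derive a new snapshot from "old" plus the XIDs that have ended since
 * it was written. */
static void
advance_snapshot(const ToySnapshot *old, const int *ended, int nended,
                 ToySnapshot *out)
{
    int highest = old->xmax - 1;

    for (int i = 0; i < nended; i++)
        if (ended[i] > highest)
            highest = ended[i];

    /* Still running: members of the old xip[] plus every XID started
     * since old->xmax, minus everything that has since ended. */
    out->nxip = 0;
    for (int xid = old->xmin; xid <= highest; xid++)
    {
        bool was_running = xid_in(old->xip, old->nxip, xid) ||
                           xid >= old->xmax;

        if (was_running && !xid_in(ended, nended, xid))
            out->xip[out->nxip++] = xid;
    }
    out->xmax = highest + 1;
    out->xmin = (out->nxip > 0) ? out->xip[0] : out->xmax;
}
```

Feeding in the example from the thread — old snapshot {xmin 5, xip 8 10 12, xmax 15} with ended XIDs {8, 10, 12, 18, 20, 21} — yields xmin = 15, xmax = 22, xip = {15, 16, 17, 19}.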

> But if we do above way to calculate Xmin, we need to check in existing Xip
> array and committed Xid array to find Xmin. Won’t this cause reasonable time
> even though it is outside lock time if Xip and Xid are large.

Yes, Tom raised this concern earlier.  I can't answer it for sure
without benchmarking, but clearly xip[] can't be allowed to get too
big.

>> Because GetSnapshotData() computes a new value for RecentGlobalXmin by
>> scanning the ProcArray.  This isn't costing a whole lot extra right now
>> because the xmin and xid fields are normally in the same cache line, so
>> once you've looked at one of them it doesn't cost that much extra to
>> look at the other.  If, on the other hand, you're not looking at (or even
>> locking) the ProcArray, then doing so just to recompute RecentGlobalXmin
>> sucks.
>
> Yes, this is more time as compare to earlier, but if our approach to
> calculate Xmin is like above point, then one extra read outside lock should
> not matter. However if for above point approach is different then it will be
> costlier.

It's not one extra read - you'd have to look at every PGPROC.  And it
is not outside a lock, either.  You definitely need locking around
computing RecentGlobalXmin; see src/backend/access/transam/README.  In
particular, if someone with proc->xmin = InvalidTransactionId is
taking a snapshot while you're computing RecentGlobalXmin, and then
stores a proc->xmin less than your newly-computed RecentGlobalXmin,
you've got a problem.  That can't happen right now because no
transactions can commit while RecentGlobalXmin is being computed, but
the point here is precisely to allow those operations to (mostly) run
in parallel.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] ALTER TABLE ONLY ...DROP CONSTRAINT is broken in HEAD.

2011-09-12 Thread Alexey Klyukin

This works in 9.1, but not in HEAD:

CREATE TABLE parent(id INTEGER, CONSTRAINT id_check CHECK(id>1));
CREATE TABLE child() INHERITS(parent);

ALTER TABLE ONLY parent DROP CONSTRAINT id_check;

I'm getting:
ERROR:  relation 16456 has non-inherited constraint "id_check"
where 16456 is the oid of the child table.

It seems that the pg_constraint scan at ATExecDropConstraint (tablecmds.c:6751) 
is re-reading those tuples that were updated in the previous iterations of this 
scan, at least that's what I've observed in gdb. I'm not sure how to fix this 
yet.

--
Alexey Klyukin    http://www.commandprompt.com
The PostgreSQL Company – Command Prompt, Inc.







Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Robert Haas
On Mon, Sep 12, 2011 at 10:30 AM, Peter Eisentraut  wrote:
> On mån, 2011-09-12 at 10:00 -0400, Robert Haas wrote:
>> I certainly think there is value in pushing an alpha release after
>> CF4, and maybe even after CF3.
>
> Yes, that makes sense.  Although I was surprised to see that the
> download numbers dropped off significantly for the later alphas.

IIUC, alpha4 got the most, I guess because that was the first one that
was alleged to be feature-complete.  alpha5 had the least, but that's
probably because it was just a bunch of bug fixes over alpha4, but not
enough to make the result beta-quality, thus less interesting.  Also,
I think that may have been the one we forgot to announce.

>> Whether or not it's worthwhile to do
>> them for earlier CFs I'm less certain about, but there seem to be
>> several people speaking up and saying that they like having alpha
>> releases, and if the hold-up here is just that we need someone to tag
>> and bundle, I'm certainly willing to sign on the dotted line for that
>> much.  We'd still need someone to write release notes, though,
>
> Writing the release notes is really the main part of the work.  Bundling
> the release takes 15 minutes, writing the announcement takes 15 minutes
> (copy and paste), writing the release notes takes about 2 days.

Yep.  So perhaps the question is whether anyone's willing to do that work.

>> probably someone to arrange for the minimal amount of necessary PR
>> work (announcements, etc.), and (somewhat optionally) packagers.
>
> We've tried that in the past, and haven't had much impact.

I think we at least need to announce the releases.  Packaging is
optional, but it's nice if people are willing to do it, and I would
assume most packagers have this fairly well automated.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-12 Thread Robert Haas
On Mon, Sep 12, 2011 at 9:39 AM, Florian Pflug  wrote:
> Really, it's not *that* hard to put a retry loop around "read" and "write".

+1.  I don't see what we're gaining by digging in our heels on this
one.  Retry loops around read() and write() are pretty routine, and I
wouldn't like to bet this is the only case where not having them could
cause an unnecessary failure.

Now, that having been said, I *really* think we could use some better
documentation on which mount options we believe to be safe, and not
just for NFS.  Right now we have practically nothing.
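The kind of retry loop being discussed is a routine pattern; a generic sketch (not PostgreSQL's actual code) looks like this:

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Write the full buffer, retrying after partial writes and EINTR.
 * Returns 0 on success, -1 on error with errno set. */
static int
write_all(int fd, const char *buf, size_t len)
{
    while (len > 0)
    {
        ssize_t n = write(fd, buf, len);

        if (n < 0)
        {
            if (errno == EINTR)
                continue;       /* interrupted before any byte: retry */
            return -1;          /* genuine error */
        }
        buf += n;               /* partial write: advance and retry */
        len -= (size_t) n;
    }
    return 0;
}
```

The read-side loop is symmetric, with the extra case that a return of zero means end-of-file rather than "retry".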

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [REVIEW] prepare plans of embedded sql on function start

2011-09-12 Thread Tom Lane
Pavel Stehule  writes:
> I started work on proposed check statement option and there are a few 
> questions?

> what is sense of this statement for others PL?

IMO you should design this as a call to the PL's validator function.
It's not necessary to make other PLs do anything more than their
existing validators do (at least for now).

regards, tom lane



Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Peter Eisentraut
On mån, 2011-09-12 at 10:00 -0400, Robert Haas wrote:
> I certainly think there is value in pushing an alpha release after
> CF4, and maybe even after CF3.

Yes, that makes sense.  Although I was surprised to see that the
download numbers dropped off significantly for the later alphas.

> Whether or not it's worthwhile to do
> them for earlier CFs I'm less certain about, but there seem to be
> several people speaking up and saying that they like having alpha
> releases, and if the hold-up here is just that we need someone to tag
> and bundle, I'm certainly willing to sign on the dotted line for that
> much.  We'd still need someone to write release notes, though,

Writing the release notes is really the main part of the work.  Bundling
the release takes 15 minutes, writing the announcement takes 15 minutes
(copy and paste), writing the release notes takes about 2 days.

> probably someone to arrange for the minimal amount of necessary PR
> work (announcements, etc.), and (somewhat optionally) packagers.

We've tried that in the past, and haven't had much impact.




Re: [HACKERS] cheaper snapshots redux

2011-09-12 Thread Robert Haas
On Sun, Sep 11, 2011 at 11:08 PM, Amit Kapila  wrote:
>   In the approach mentioned in your idea, it mentioned that once after
> taking snapshot, only committed XIDs will be updated and sometimes snapshot
> itself.
>
>   So when the xmin will be updated according to your idea as snapshot will
> not be updated everytime so xmin cannot be latest always.

If you know what transactions were running the last time a snapshot
summary was written and what transactions have ended since then, you
can work out the new xmin on the fly.  I have working code for this
and it's actually quite simple.

>> RecentGlobalXmin doesn't need to be completely up to date, and in fact
>> recomputing it on every snapshot becomes prohibitively expensive with
>> this approach.  I'm still struggling with the best way to handle that.
>
>   RecentGlobalXmin and RecentXmin are mostly updated with snapshots xmin
> and that too outside ProcArrayLock, so why it will be expensive if you have
> updated xmin.

Because GetSnapshotData() computes a new value for RecentGlobalXmin by
scanning the ProcArray.  This isn't costing a whole lot extra right
now because the xmin and xid fields are normally in the same cache
line, so once you've looked at one of them it doesn't cost that much
extra to look at the other.  If, on the other hand, you're not looking
at (or even locking) the ProcArray, then doing so just to recompute
RecentGlobalXmin sucks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Robert Haas
On Mon, Sep 12, 2011 at 5:23 AM, Dave Page  wrote:
> On Sat, Sep 10, 2011 at 9:47 PM, Joshua Berkus  wrote:
>>> Download numbers for the installers were bordering on noise compared
>>> to the GA builds last time I looked, double figures iirc. I don't
>>> know about the tarballs offhand and can't check ATM.
>>
>> Can you check when you get a chance?   I know that the DL numbers for the 
>> first alphas were very low, but I'm wondering about Alpha 3, 4 and 5.
>
> [ >1100 downloads for alphas1-3, >2000 downloads for alpha4, ~900 downloads 
> for alpha5 ]

Hmm, that seems pretty respectable, all things considered.

Honestly, I'm not sure how to feel about this.   As a practical
matter, I suspect that the value of alphas early in the release cycle
is limited.  Most of the big ticket features that people are going to
be interested in testing tend to arrive late in the release cycle.  If
you look at the 9.1 release notes, the first commit to implement any
portion of a feature that made the "major features" list for the
release was my commit to add SECURITY LABEL, which happened on
September 27, 2010.  As of the turn of the year, we had 2.5 of the 10
features that ultimately made that list in the tree.  IMHO, we should
be making a more concerted effort to get more of our major features
done and committed sooner, but since we aren't, testing of early
alphas seems likely to be a fairly unrewarding activity.  Stability
testing is likely going to be largely useless (because there will be
lots more code churn just before feature freeze), and feature testing
is going to be confined to the relatively limited amount of stuff that
gets done and committed early.

I certainly think there is value in pushing an alpha release after
CF4, and maybe even after CF3.  Whether or not it's worthwhile to do
them for earlier CFs I'm less certain about, but there seem to be
several people speaking up and saying that they like having alpha
releases, and if the hold-up here is just that we need someone to tag
and bundle, I'm certainly willing to sign on the dotted line for that
much.  We'd still need someone to write release notes, though,
probably someone to arrange for the minimal amount of necessary PR
work (announcements, etc.), and (somewhat optionally) packagers.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Josh Berkus
On 9/12/11 2:23 AM, Dave Page wrote:
> Note that these are only numbers from people who click through the
> flags pages on the website. We don't have numbers for people who
> download directly from the FTP site or a mirror.

I'd say that 1200 downloads of each alpha is pretty significant.  If
even 1/4 of those people actually do testing, then that's a lot more
than we had for 8.3.  It's also a heck of a lot more than I'd expect.

Sure, it's 5% of an update version's downloads.  So what?  We don't
expect most people to do alpha testing.  But if *hundreds* of people are
doing alpha testing, we want them to keep doing it.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-12 Thread Florian Pflug
On Sep12, 2011, at 14:54 , k...@rice.edu wrote:
> Many, many, many other software packages expect I/O usage to be the same on
> an NFS volume and a local disk volume, including Oracle. Coding every 
> application,
> or more likely mis-coding, to handle this gives every application another 
> chance
> to get it wrong. If the OS does this, when it gets it right, all of the apps 
> get
> it right. I think you should be surprised when other software actually deals 
> with
> broken I/O semantics gracefully rather than concerned when one of a pantheon 
> of
> programs does not. My two cents.

I don't buy that. People seem to be perfectly able to code correct networking
applications (correct from a read/write API POV at least), yet those 
applications
need to deal with partial reads and writes too.

Really, it's not *that* hard to put a retry loop around "read" and "write".

Also, non-interruptible IO primitives are by no means "right". At best, they're
a compromise between complexity and functionality for I/O devices with rather
short (and bounded) communication timeouts - because in that case, processes are
only blocked un-interruptibly for a short while.

best regards,
Florian Pflug




Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-12 Thread Florian Pflug
On Sep12, 2011, at 14:54 , Peter Eisentraut wrote:
> On mån, 2011-09-12 at 16:46 +1000, George Barnett wrote:
>> On 12/09/2011, at 3:59 PM, Florian Pflug wrote:
>>> Still, I agree with Noah and Kevin that we ought to deal more gracefully 
>>> with this, i.e. resubmit after a partial read() or write(). AFAICS there's 
>>> nothing to be gained by not doing that, and the increase in code complexity 
>>> should be negligible. If we do that, however, I believe we might as well 
>>> handle EINTR correctly, even if SA_RESTART should prevent us from ever 
>>> seeing that.
>> 
>> It does still concern me that pgsql did not deal with this as gracefully as 
>> other software.  I hope the list will consider a patch to resolve that.
> 
> We have signal handling configured so that system calls are not
> interrupted.  So there is ordinarily no reason to do anything more
> graceful.  The problem is that NFS is in this case not observing that
> setting.  It's debatable whether it's worth supporting that; just saying
> that the code is correct as it stands.

SA_RESTART doesn't protect against partial reads/writes due to signal delivery,
it only removes the need to check for EINTR. In other words, it retries until
at least one byte has been written, not until all bytes have been written.

The GNU LibC documentation has this to say on the subject

  "There is one situation where resumption never happens no matter which
   choice you make: when a data-transfer function such as read or write is
   interrupted by a signal after transferring part of the data. In this case,
   the function returns the number of bytes already transferred, indicating
   partial success."[1]

While it's true that reads and writes are by tradition non-interruptible, I
personally wouldn't bet that they'll stay that way forever. It all depends on
whether the timeouts involved in the communication with a disk are short enough
to mask the difference - once they get too long (or even infinite like in the
case of "hard" NFS mounts) you pay for non-interruptible primitives with
un-killable stuck processes. Since the current trend is to move storage further
away from processing, and to put non-deterministic networks like ethernet 
between
the two, I expect situations in which read/write primitives are interruptible
to increase, not decrease.

And BTW, the GNU LibC documentations doesn't seem to mention anything about
local reads and writes being non-interruptible. In fact, it even says

  "A signal can arrive and be handled while an I/O primitive such as open or 
read
   is waiting for an I/O device. If the signal handler returns, the system faces
   the question: what should happen next?"[1]

Had the GNU people faith in local read and writes being non-interruptible, 
they'd
probably have said "network device" or "remote device", not "I/O device".

best regards,
Florian Pflug

[1] 
http://www.gnu.org/s/hello/manual/libc/Interrupted-Primitives.html#Interrupted-Primitives


Re: [HACKERS] What Would You Like To Do?

2011-09-12 Thread Peter Eisentraut
On sön, 2011-09-11 at 21:21 -0700, David E. Wheeler wrote:
> * Column-level collation support: Peter/Enterprise DB

Column-level collation support already exists.





Re: [HACKERS] psql additions

2011-09-12 Thread Andrew Dunstan



On 09/12/2011 08:39 AM, Peter Eisentraut wrote:
> On mån, 2011-09-12 at 05:26 -0400, Andrew Dunstan wrote:
>> On 09/11/2011 11:43 PM, Robert Haas wrote:
>>>> Second, I'd like to be able to set a minimum number of lines below
>>>> which the pager would not be used, something like:
>>>>
>>>>    \pset pagerminlines 200
>>>>
>>>> Thoughts?
>>> Gee, why do I feel like we have something like this already?
>> We do? We control columns, but not lines AFAIK.
> We check the lines and columns in the terminal window.  Oddly enough,
> that code is in two separate places, so I don't know if they have
> consistent effects.

Yes, but what I want is to be able to control it. So even if my terminal
is 50 lines long I want to be able to say "Don't use the pager unless
there are 200 or more lines."


cheers

andrew



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-12 Thread k...@rice.edu
On Mon, Sep 12, 2011 at 04:46:53PM +1000, George Barnett wrote:
> On 12/09/2011, at 3:59 PM, Florian Pflug wrote:
> 
> > If you really meant to say "intr" there (and not "nointr") then that 
> > probably explains the partial writes.
> > 
> > Still, I agree with Noah and Kevin that we ought to deal more gracefully 
> > with this, i.e. resubmit after a partial read() or write(). AFAICS there's 
> > nothing to be gained by not doing that, and the increase in code complexity 
> > should be negligible. If we do that, however, I believe we might as well 
> > handle EINTR correctly, even if SA_RESTART should prevent us from ever 
> > seeing that.
> 
> 
> Hi Florian,
> 
> You are indeed correct.  Setting nointr also resolves my issue.  I could 
> swear I checked this, but obviously not.
> 
> It does still concern me that pgsql did not deal with this as gracefully as 
> other software.  I hope the list will consider a patch to resolve that.
> 
> Thanks in advance,
> 
> George

Hi George,

Many, many, many other software packages expect I/O usage to be the same on
an NFS volume and a local disk volume, including Oracle. Coding every 
application,
or more likely mis-coding, to handle this gives every application another chance
to get it wrong. If the OS does this, when it gets it right, all of the apps get
it right. I think you should be surprised when other software actually deals 
with
broken I/O semantics gracefully rather than concerned when one of a pantheon of
programs does not. My two cents.

Regards,
Ken



Re: [HACKERS] Patch to improve reliability of postgresql on linux nfs

2011-09-12 Thread Peter Eisentraut
On mån, 2011-09-12 at 16:46 +1000, George Barnett wrote:
> On 12/09/2011, at 3:59 PM, Florian Pflug wrote:
> 
> > If you really meant to say "intr" there (and not "nointr") then that 
> > probably explains the partial writes.
> > 
> > Still, I agree with Noah and Kevin that we ought to deal more gracefully 
> > with this, i.e. resubmit after a partial read() or write(). AFAICS there's 
> > nothing to be gained by not doing that, and the increase in code complexity 
> > should be negligible. If we do that, however, I believe we might as well 
> > handle EINTR correctly, even if SA_RESTART should prevent us from ever 
> > seeing that.
> 
> 
> Hi Florian,
> 
> You are indeed correct.  Setting nointr also resolves my issue.  I could 
> swear I checked this, but obviously not.
> 
> It does still concern me that pgsql did not deal with this as gracefully as 
> other software.  I hope the list will consider a patch to resolve that.

We have signal handling configured so that system calls are not
interrupted.  So there is ordinarily no reason to do anything more
graceful.  The problem is that NFS is in this case not observing that
setting.  It's debatable whether it's worth supporting that; just saying
that the code is correct as it stands.





Re: [HACKERS] psql additions

2011-09-12 Thread Peter Eisentraut
On mån, 2011-09-12 at 05:26 -0400, Andrew Dunstan wrote:
> 
> On 09/11/2011 11:43 PM, Robert Haas wrote:
> >
> >
> >> Second, I'd like to be able to set a minimum number of lines below which 
> >> the
> >> pager would not be used, something like:
> >>
> >>\pset pagerminlines 200
> >>
> >> Thoughts?
> > Gee, why do I feel like we have something like this already?
> 
> We do? We control columns, but not lines AFAIK.

We check the lines and columns in the terminal window.  Oddly enough,
that code is in two separate places, so I don't know if they have
consistent effects.





[HACKERS] WIP: Collecting statistics on CSV file data

2011-09-12 Thread Etsuro Fujita
Hi there,

To enable file_fdw to estimate costs of scanning a CSV file more
accurately, I would like to propose a new FDW callback routine,
AnalyzeForeignTable, which allows the ANALYZE command to collect
statistics on a foreign table, and a corresponding file_fdw function,
fileAnalyzeForeignTable. Attached is my WIP patch.

Here's a summary of the implementation:

void AnalyzeForeignTable (Relation relation, VacuumStmt *vacstmt, int
elevel);
This is a new FDW callback routine to collect statistics on a foreign
table and store the results in the pg_class and pg_statistic system
catalogs. This is called when the ANALYZE command is executed. (ANALYZE
must be run manually because autovacuum does not analyze foreign
tables.)

static void fileAnalyzeForeignTable(Relation relation, VacuumStmt
*vacstmt, int elevel);
This new file_fdw function collects and stores the same statistics on
CSV file data as collected on a local table except for index related
statistics by executing the sequential scan on the CSV file and
acquiring sample rows using Vitter's algorithm. (It is time-consuming
for a large file.)
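The sampling idea referred to here can be sketched with the simplest reservoir variant ("Algorithm R"); the actual patch would go through PostgreSQL's analyze machinery, so this is only an illustration of why a single sequential pass suffices even when the file's row count is unknown in advance:

```c
#include <stdlib.h>

/* Fill sample[] with a uniform k-row sample from a stream of n values in
 * one pass.  Each incoming row past the first k replaces a random slot
 * with probability k/(i+1), which keeps the sample uniform. */
static void
reservoir_sample(const int *stream, int n, int *sample, int k)
{
    for (int i = 0; i < n; i++)
    {
        if (i < k)
            sample[i] = stream[i];          /* fill the reservoir first */
        else
        {
            int j = rand() % (i + 1);       /* uniform index in [0, i] */

            if (j < k)
                sample[j] = stream[i];      /* replace with decreasing odds */
        }
    }
}
```

Vitter's refinement skips ahead rather than drawing a random number per row, which matters for large files, but the invariant is the same.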

estimate_costs() (more precisely, clauselist_selectivity() in
estimate_costs()) estimates baserel->rows using the statistics stored in
the pg_statistic system catalog. If there are no statistics,
estimate_costs() estimates it using the default statistics as in
PostgreSQL 9.1.

I am able to demonstrate the effectiveness of this patch. The following
run is performed on a single core of a 3.00GHz Intel Xeon CPU with 8GB
of RAM. Configuration settings are default except for work_mem = 256MB.
We can see from this result that the optimiser selects a good plan when
the foreign tables have been analyzed.

I appreciate your comments and suggestions.

[sample csv file data]
postgres=# COPY (SELECT s.a, repeat('a', 100) FROM generate_series(1,
500) AS s(a)) TO '/home/pgsql/sample_csv_data1.csv' (FORMAT csv,
DELIMITER ',');
COPY 500
postgres=# COPY (SELECT (random()*1)::int, repeat('b', 100) FROM
generate_series(1, 500)) TO '/home/pgsql/sample_csv_data2.csv'
(FORMAT csv, DELIMITER ',');
COPY 500

[Unpatched]
postgres=# CREATE FOREIGN TABLE tab1 (aid INTEGER, msg text) SERVER
file_fs OPTIONS (filename '/home/pgsql/sample_csv_data1.csv', format
'csv', delimiter ',');
CREATE FOREIGN TABLE
postgres=# CREATE FOREIGN TABLE tab2 (aid INTEGER, msg text) SERVER
file_fs OPTIONS (filename '/home/pgsql/sample_csv_data2.csv', format
'csv', delimiter ',');
CREATE FOREIGN TABLE
postgres=# SELECT count(*) FROM tab1;
  count
---------
 500
(1 row)

postgres=# SELECT count(*) FROM tab2;
  count
---------
 500
(1 row)

postgres=# EXPLAIN ANALYZE SELECT count(*) FROM tab1, tab2 WHERE
tab1.aid >= 0 AND tab1.aid <= 1 AND tab1.aid = tab2.aid;
                                          QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=128859182.29..128859182.30 rows=1 width=0) (actual
time=27321.304..27321.304 rows=1 loops=1)
   ->  Merge Join  (cost=5787102.68..111283426.33 rows=7030302383
width=0) (actual time=22181.428..26736.194 rows=4999745 loops=1)
 Merge Cond: (tab1.aid = tab2.aid)
 ->  Sort  (cost=1857986.37..1858198.83 rows=84983 width=4)
(actual time=5964.282..5965.958 rows=1 loops=1)
   Sort Key: tab1.aid
   Sort Method: quicksort  Memory: 853kB
   ->  Foreign Scan on tab1  (cost=0.00..1851028.44
rows=84983 width=4) (actual time=0.071..5962.382 rows=1 loops=1)
 Filter: ((aid >= 0) AND (aid <= 1))
 Foreign File: /home/pgsql/sample_csv_data1.csv
 Foreign File Size: 54396
 ->  Materialize  (cost=3929116.30..4011842.29 rows=16545197
width=4) (actual time=16216.953..19550.846 rows=500 loops=1)
   ->  Sort  (cost=3929116.30..3970479.30 rows=16545197
width=4) (actual time=16216.947..18418.684 rows=500 loops=1)
 Sort Key: tab2.aid
 Sort Method: external merge  Disk: 68424kB
 ->  Foreign Scan on tab2  (cost=0.00..1719149.70
rows=16545197 width=4) (actual time=0.081..6059.630 rows=500 loops=1)
   Foreign File: /home/pgsql/sample_csv_data2.csv
   Foreign File Size: 529446313
 Total runtime: 27350.673 ms
(18 rows)

[Patched]
postgres=# CREATE FOREIGN TABLE tab1 (aid INTEGER, msg text) SERVER
file_fs OPTIONS (filename '/home/pgsql/sample_csv_data1.csv', format
'csv', delimiter ',');
CREATE FOREIGN TABLE
postgres=# CREATE FOREIGN TABLE tab2 (aid INTEGER, msg text) SERVER
file_fs OPTIONS (filename '/home/pgsql/sample_csv_data2.csv', format
'csv', delimiter ',');
CREATE FOREIGN TABLE
postgres=# ANALYZE VERBOSE tab1;
INFO:  analyzing "public.tab1"
INFO:  "tab1": scanned, containing 500 rows; 3 rows in sa

Re: [HACKERS] superusers are members of all roles?

2011-09-12 Thread Stephen Frost
* Andrew Dunstan (and...@dunslane.net) wrote:
> It's NOT changing that. All this affects is how +groupname is
> treated in pg_hba.conf, i.e. do we treat every superuser there as
> being a member of every group.

Ah, sorry for the noise, that's fine (and I'm a bit surprised it was a
one-liner; guess I should go look at the patch... ;).

Thanks,

Stephen


signature.asc
Description: Digital signature


[HACKERS] [v9.2] "database" object class of contrib/sepgsql

2011-09-12 Thread Kohei KaiGai
The attached patch is a portion that we splitted off when we added
pg_shseclabel system catalog.

It enables contrib/sepgsql to assign a security label on pg_database
objects, which is used as a basis to compute the default security
label of schema objects.
Currently, we have an ugly assumption that all the pg_database entries
are labeled as "system_u:object_r:sepgsql_db_t:s0", and default
security label of schema is computed based on this assumption. See,
sepgsql_schema_post_create() in sepgsql/schema.c

It also enables initial labeling at sepgsql_restorecon() and
permission checks on relabeling, however, nothing are checked any
more.

Thanks,
-- 
KaiGai Kohei 


pgsql-v9.2-sepgsql-database.v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql additions

2011-09-12 Thread Andrew Dunstan



On 09/11/2011 11:43 PM, Robert Haas wrote:
>> Second, I'd like to be able to set a minimum number of lines below which the
>> pager would not be used, something like:
>>
>>    \pset pagerminlines 200
>>
>> Thoughts?
>
> Gee, why do I feel like we have something like this already?

We do? We control columns, but not lines AFAIK.

cheers

andrew



Re: [HACKERS] Alpha 1 for 9.2

2011-09-12 Thread Dave Page
On Sat, Sep 10, 2011 at 9:47 PM, Joshua Berkus  wrote:
>
>> Download numbers for the installers were bordering on noise compared
>> to the GA builds last time I looked, double figures iirc. I don't
>> know about the tarballs offhand and can't check ATM.
>
> Can you check when you get a chance?   I know that the DL numbers for the 
> first alphas were very low, but I'm wondering about Alpha 3, 4 and 5.

186_www=#  select count(*) from clickthrus where path like
'%postgresql-9.1alpha1.tar.%' and ts >= '2009-09-01';
 count
---
  1431
(1 row)

186_www=#  select count(*) from clickthrus where path like
'%postgresql-9.1alpha2.tar.%' and ts >= '2009-09-01';
 count
---
  1335
(1 row)

186_www=#  select count(*) from clickthrus where path like
'%postgresql-9.1alpha3.tar.%' and ts >= '2009-09-01';
 count
---
  1127
(1 row)

186_www=#  select count(*) from clickthrus where path like
'%postgresql-9.1alpha4.tar.%' and ts >= '2009-09-01';
 count
---
  2011
(1 row)

186_www=#  select count(*) from clickthrus where path like
'%postgresql-9.1alpha5.tar.%' and ts >= '2009-09-01';
 count
---
   929
(1 row)

and for comparison:

186_www=#  select count(*) from clickthrus where path like
'%postgresql-9.0.3.tar.%' and ts >= '2009-09-01';
 count
---
 26211
(1 row)

186_www=#  select count(*) from clickthrus where path like
'%postgresql-9.0.4.tar.%' and ts >= '2009-09-01';
 count
---
 34769
(1 row)

Note that these are only numbers from people who click through the
flags pages on the website. We don't have numbers for people who
download directly from the FTP site or a mirror.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [REVIEW] prepare plans of embedded sql on function start

2011-09-12 Thread Pavel Stehule
Hello

I started work on the proposed check statement option, and there are a few questions.

What is the sense of this statement for other PLs? When we are mainly
solving a PL/pgSQL issue, does it make sense to implement a new
statement? Isn't there some problem in our CREATE FUNCTION design?
Separating it into two steps would have somewhat strange behaviour - we
cannot check a function before its registration (we can, but we would
have to play some games with names) - so some conditional CREATE
FUNCTION statement would be necessary, something like "CREATE CHECKED
FUNCTION" or CHECK FUNCTION with the function body.

comments?

Regards

Pavel Stehule


2011/9/11 Tom Lane :
> Andy Colson  writes:
>> [ Andy's dubious about adding plpgsql syntax to control this feature ]
>
> Yeah, that bothers me a lot too.
>
>> One option I'd thought about, was to extended ANALYZE to support functions.
>
> That's actually quite a good idea, not least because the extra checking
> happens only when you ask for it and not every time the function is
> loaded into a new session.
>
> I'm not that happy with overloading the ANALYZE keyword to mean this
> (especially not since there is already meaning attached to the syntax
> "ANALYZE x(y)").  But we could certainly use some other name --- I'm
> inclined to suggest CHECK:
>
>        CHECK FUNCTION function_name(arglist);
>
> People would want some sort of wild card capability; at the very least
> "check all plpgsql functions owned by me".  Not sure what that ought
> to look like syntactically.
>
> It might also be a good idea to make sure there's room in the syntax to
> specify different checking options.  We already would have reason to
> want "just do the existing style of validation check" versus this more
> intensive check.  And it's not hard to foresee other sorts of checking
> in future.
>
> Also, this would force us to invent PL-independent infrastructure for
> doing the checking.  I'm envisioning an additional argument to the
> existing PL validator function that tells it what checking options to
> use.
>
>                        regards, tom lane
>
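To make the proposal above concrete, the syntax under discussion might
look something like the following sketch (purely hypothetical -- no such
statement exists yet, and the wildcard and options forms are pure
speculation on top of Tom's suggestions):

```sql
-- check a single function, by name and argument list
CHECK FUNCTION my_func(integer, text);

-- a possible wildcard form: "all plpgsql functions owned by me"
CHECK FUNCTION ALL IN LANGUAGE plpgsql;

-- a possible way to pass checking options to the PL validator
CHECK FUNCTION my_func(integer, text) WITH (basic_validation);
```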



Re: [HACKERS] EXPLAIN and nfiltered, take two

2011-09-12 Thread Marko Tiikkaja

Hi,

On 2011-09-10 19:50, Marti Raudsepp wrote:

I tried this patch and noticed something weird. This is probably not
intentional:


Indeed, it is not intentional.  Will see how I can fix this.

Thank you for trying the patch out!


--
Marko Tiikkaja    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
