Re: [HACKERS] Proposed feature: Selective Foreign Keys

2013-12-02 Thread Albe Laurenz
Tom Dunstan wrote:
 The Problem
 -----------
 One case that traditional SQL doesn't handle very well is when you have a 
 child entity which can be
 attached to a number of different parent entities. Examples might be 
 comments, tags or file
 attachments - we might have 20 different entities in the system that we would 
 like our users to be
 able to add comments to, but the existing solutions for mapping this all have 
 downsides.
 
 Existing solution 1: Join tables ahoy
[...]
 
 Existing solution 2: Enter the matrix
[...]
 
 Existing solution 3: Embed the matrix
[...]
 
 Existing solution 4: Abandon ye all referential integrity
[...]
 
 Existing solution 5: Everything's a thing
[...]
 
 Basically none of the above handle the situation very well. The cleanest is 
 solution 4, but lack of RI
 sucks.

I personally think that options 3 and 1 are the cleanest ones, but I
agree that they are not entirely satisfying.

I could think of a sixth option:
add a nullable column to each table that needs a comment, tag or whatever.
You could use the same application logic for each of these columns, but
particularly for things that are more complicated than mere comments it
might be nice to have them centralized in one table.
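For illustration, here is a minimal sketch of that sixth option (assuming
one comment per row, and borrowing the blog/event table names used in the
proposal below):

CREATE TABLE comment (
    id      bigserial PRIMARY KEY,
    content text NOT NULL
);

-- Each table that needs a comment gets its own nullable FK column.
CREATE TABLE blog (
    id         bigserial PRIMARY KEY,
    comment_id int8 REFERENCES comment(id)  -- NULL when nothing attached
);

CREATE TABLE event (
    id         bigserial PRIMARY KEY,
    comment_id int8 REFERENCES comment(id)
);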

 Feature Proposal: Selective foreign keys.
 -----------------------------------------
 Allow foreign keys to have where clauses. The above comment example using
 solution 4 might then look like the following:
 
 CREATE TABLE comment (
   id bigserial primary key,
   content text not null,
   parent_entity regclass not null,
   parent_id int8
 );
 ALTER TABLE comment ADD CONSTRAINT comment_blog_fk FOREIGN KEY (parent_id)
 REFERENCES blog(id) WHERE (parent_entity = 'blog');
 ALTER TABLE comment ADD CONSTRAINT comment_event_fk FOREIGN KEY (parent_id)
 REFERENCES event(id) WHERE (parent_entity = 'event');

 Comments?

I didn't read the patch and I cannot comment on how easy it would be
to implement this and what the performance impact might be.

What strikes me is that since foreign key constraints are implemented
as triggers in PostgreSQL, this solution would probably not have many
performance benefits over a self-written trigger that implements the
same functionality.  Since you need two triggers for your example,
the performance might even be worse than a single self-written trigger.

Now performance isn't everything, but that would mean that the benefit
of your proposal is entirely on the usability side.

I personally don't think that it is so difficult to write a trigger
for that functionality yourself, but I guess that the argument for
this feature rests on how coveted such a functionality would be
(to justify the trade-off in code complexity).
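For illustration, a minimal sketch of such a hand-written trigger for the
comment example above (table names from the proposal; as Andres notes
downthread, this is not fully correct under concurrency, which is exactly
the low-level problem with do-it-yourself RI):

-- Naive child-side validation only: the parent tables (blog, event)
-- would additionally need DELETE/UPDATE triggers, and none of this is
-- race-free without the crosscheck snapshots the built-in RI triggers use.
CREATE FUNCTION check_comment_parent() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.parent_entity = 'blog'::regclass THEN
        PERFORM 1 FROM blog WHERE id = NEW.parent_id;
    ELSIF NEW.parent_entity = 'event'::regclass THEN
        PERFORM 1 FROM event WHERE id = NEW.parent_id;
    END IF;
    -- Unknown parent_entity values are rejected here for simplicity.
    IF NOT FOUND THEN
        RAISE EXCEPTION 'comment: no parent % in %',
            NEW.parent_id, NEW.parent_entity;
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER comment_parent_check
BEFORE INSERT OR UPDATE OF parent_entity, parent_id ON comment
FOR EACH ROW EXECUTE PROCEDURE check_comment_parent();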

Maybe one measure would be to figure out if any other relational
database system has implemented such a functionality.  If there is
more than one, it might show that there is a certain demand for such
a feature.

Of course that's only circumstantial evidence; I guess that a
better measure would be how many people speak up and say
"I have always wanted that."

Yours,
Laurenz Albe



Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Heikki Linnakangas

On 12/02/2013 05:34 AM, Stephen Frost wrote:

* Jeff Davis (pg...@j-davis.com) wrote:

I see where you're coming from, but after some thought, and looking at
the patch, I think we really do want a catalog representation for (at
least some) extensions.


Perhaps I'm missing something- but we already *have* a catalog
representation for every extension that's ever installed into a given
database.  A representation that's a heck of a lot better than a big
text blob.


Right. I think Jeff was thinking of a catalog representation for 
extensions that haven't been installed yet, but are available in the 
system and could be installed with CREATE EXTENSION foo. I wouldn't mind 
having a catalog like that. Even without any of this extension template 
stuff, it would be handy to have a view that lists all the extensions 
available in the filesystem.



   2. When 9.4 gets released, we need some solid advice for extension
authors. If they have a native shared library, I assume we just tell
them to keep using the file-based templates. But if they have a SQL-only
extension, do we tell them to port to the in-catalog templates? What if
they port to in-catalog templates, and then decide they just want to
optimize one function by writing it in native code? Do they have to port
back? What should the authors of SQL-only extensions distribute on PGXN?
Should there be a migration period where they offer both kinds of
templates until they drop support for 9.3?


This is one of the main things that I think Heikki was trying to drive
at with his comment- we really don't *want* to make extension authors
have to do anything different than what they do today.  With an external
tool, they wouldn't need to and it would just be two different ways for
an extension to be installed into a given database.  In the end though,
if we're telling people to 'port' their extensions, then I think we've
already lost.


Exactly.

There should be no difference between file-based extensions and 
catalog-based extensions. It's just two different ways to install the 
same extension. The extension author doesn't need to care about that, 
it's the DBA that decides which method to use to install it.


I'm going to object loudly to any proposal that doesn't meet that criteria.

- Heikki




Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Heikki Linnakangas

On 12/02/2013 08:40 AM, Michael Paquier wrote:

The data replication bug causing data corruption on hot slaves found
lately (http://wiki.postgresql.org/wiki/Nov2013ReplicationIssue) is
causing a certain amount of damage among the users of Postgres, either
companies or individuals, and impacts a lot of people. So perhaps it
would be a good time to start thinking about adding some dedicated
regression tests for replication, archiving, PITR, data integrity,
parameter reloading, etc. The number of possible use-cases here is
vast, but let's say that it involves the manipulation of multiple
Postgres nodes and the structures (like a WAL archive) that define a
Postgres cluster.

The main purpose of those tests would be really simple: using a given
GIT repository, a buildfarm member or developer should be able to run a
single "make check" command that runs those tests on a local machine
in a fashion similar to the isolation or regression tests and validate
builds or patches.


+1. The need for such a test suite has been mentioned every single time 
that a bug or new feature related to replication, PITR or hot standby 
has come up. So yes please! The only thing missing is someone to 
actually write the thing. So if you have the time and energy, that'd be 
great!


- Heikki




Re: [HACKERS] Handling GIN incomplete splits

2013-12-02 Thread Heikki Linnakangas

On 12/01/2013 10:40 PM, Jeff Janes wrote:

On Wed, Nov 27, 2013 at 9:40 AM, Jeff Janes jeff.ja...@gmail.com wrote:


The commit 04eee1fa9ee80dabf7 of this series causes a self-deadlock in the
LWLock code during the operation below, with it trying to take
an LW_EXCLUSIVE on a high, even-numbered lockid when it already holds the
same lockid.

CREATE INDEX planet_osm_ways_nodes ON planet_osm_ways USING gin (nodes)
  WITH (FASTUPDATE=OFF);

It happens pretty reliably using osm2pgsql.

I will try to come up with a simple reproducible demonstration, and stack
trace, over the weekend.


Whatever the problem, it seems to have been fixed in ce5326eed386959aa,
"More GIN refactoring".


That's good, I guess :-). Thanks for the testing. Did you import the 
full planet.osm? I tried with a subset containing just Finland, but 
didn't see any problems.


- Heikki




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Andres Freund
On 2013-12-02 11:07:28 +0200, Heikki Linnakangas wrote:
 Perhaps I'm missing something- but we already *have* a catalog
 representation for every extension that's ever installed into a given
 database.  A representation that's a heck of a lot better than a big
 text blob.
 
 Right. I think Jeff was thinking of a catalog representation for extensions
 that haven't been installed yet, but are available in the system and could
 be installed with CREATE EXTENSION foo. I wouldn't mind having a catalog
 like that. Even without any of this extension template stuff, it would be
 handy to have a view that lists all the extensions available in the
 filesystem.

Luckily that's already there: SELECT * FROM pg_available_extensions;
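For instance, to list only the extensions that are available on the
filesystem but not yet installed in the current database (a small sketch
using that view's columns):

SELECT name, default_version
  FROM pg_available_extensions
 WHERE installed_version IS NULL
 ORDER BY name;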

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Michael Paquier
On Mon, Dec 2, 2013 at 6:24 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 +1. The need for such a test suite has been mentioned every single time that
 a bug or new feature related to replication, PITR or hot standby has come
 up. So yes please! The only thing missing is someone to actually write the
 thing. So if you have the time and energy, that'd be great!
I am sure you know who we need to convince in this case :)
-- 
Michael




Re: [HACKERS] Proposed feature: Selective Foreign Keys

2013-12-02 Thread Andres Freund
On 2013-12-02 08:57:01 +, Albe Laurenz wrote:
 What strikes me is that since foreign key constraints are implemented
 as triggers in PostgreSQL, this solution would probably not have many
 performance benefits over a self-written trigger that implements the
 same functionality.  Since you need two triggers for your example,
 the performance might even be worse than a single self-written trigger.

Note that you cannot really write correct RI triggers without playing
very low level games, i.e. writing C and using special kinds of
snapshots and such.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Andres Freund
On 2013-12-02 18:45:37 +0900, Michael Paquier wrote:
 On Mon, Dec 2, 2013 at 6:24 PM, Heikki Linnakangas
 hlinnakan...@vmware.com wrote:
  +1. The need for such a test suite has been mentioned every single time that
  a bug or new feature related to replication, PITR or hot standby has come
  up. So yes please! The only thing missing is someone to actually write the
  thing. So if you have the time and energy, that'd be great!

 I am sure you know who we need to convince in this case :)

If you're alluding to Tom, I'd guess he doesn't need to be convinced of
such a facility in general. I seem to remember him complaining about the
lack of such testing as well.
Maybe it shouldn't be part of the main regression schedule, though...

+many from me as well. I think the big battle will be how to do it, not
if in general.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [bug fix] strerror() returns ??? in a UTF-8/C database with LC_MESSAGES=non-ASCII

2013-12-02 Thread MauMau

Hi, Tom san,

From: Tom Lane t...@sss.pgh.pa.us

I've committed this with some editorialization (mostly, I used a case
statement not a constant array, because that's more like the other places
that switch on errnos in this file).

As I said, the lack of the %m string has been making troubleshooting
difficult, so I wish this to be backported to at least 9.2.


I'm waiting to see whether the buildfarm likes this before considering
back-patching.


I'm very sorry to respond so late.  Thank you so much for committing the 
patch.  I liked your code and comments.


I'll be glad if you could back-port this.  Personally, in practice, 9.1 and 
later will be sufficient.


Regards
MauMau





Re: [HACKERS] [bug fix] strerror() returns ??? in a UTF-8/C database with LC_MESSAGES=non-ASCII

2013-12-02 Thread Andres Freund
On 2013-12-02 19:36:01 +0900, MauMau wrote:
 I'll be glad if you could back-port this.  Personally, in practice, 9.1 and
 later will be sufficient.

Already happened:

Author: Tom Lane t...@sss.pgh.pa.us
Branch: REL9_3_STABLE [e3480438e] 2013-11-07 16:33:18 -0500
Branch: REL9_2_STABLE [64f5962fe] 2013-11-07 16:33:25 -0500
Branch: REL9_1_STABLE [8cfd4c6a1] 2013-11-07 16:33:28 -0500
Branch: REL9_0_STABLE [8103f49c1] 2013-11-07 16:33:34 -0500
Branch: REL8_4_STABLE [3eb777671] 2013-11-07 16:33:39 -0500

Be more robust when strerror() doesn't give a useful result.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Heikki Linnakangas hlinnakan...@vmware.com writes:
 Right. I think Jeff was thinking of a catalog representation for extensions
 that haven't been installed yet, but are available in the system and could
 be installed with CREATE EXTENSION foo. I wouldn't mind having a catalog
 like that. Even without any of this extension template stuff, it would be
 handy to have a view that lists all the extensions available in the
 filesystem.

  http://www.postgresql.org/docs/9.1/static/view-pg-available-extensions.html
  
http://www.postgresql.org/docs/9.1/static/view-pg-available-extension-versions.html

 There should be no difference between file-based extensions and
 catalog-based extensions. It's just two different ways to install the same
 extension. The extension author doesn't need to care about that, it's the
 DBA that decides which method to use to install it.

Agreed.

 I'm going to object loudly to any proposal that doesn't meet that criteria.

Please be kind enough to point me to where my current patch is drifting away
from that criteria. What you're proposing here is what I think I have
been implementing.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Michael Paquier
On Mon, Dec 2, 2013 at 7:07 PM, Andres Freund and...@2ndquadrant.com wrote:
 Maybe that it shouldn't be part of the main regression schedule...
Yes; like the isolation tests, I don't see those new tests in the main
flow either.

 +many from me as well. I think the big battle will be how to do it, not
 if in general.
Yeah, that's why we should first gather feedback about the methods
that other projects (Slony, Londiste, Pgpool) are using before
settling on one solution or another. Having something, however small,
for 9.4 would also be of great help.

I am sure, though, that getting a small prototype integrated with some
of my in-house scripts would not take that much time...
Regards,
-- 
Michael




Re: [HACKERS] Proposed feature: Selective Foreign Keys

2013-12-02 Thread Florian Pflug
On Dec2, 2013, at 10:06 , Andres Freund and...@2ndquadrant.com wrote:
 On 2013-12-02 08:57:01 +, Albe Laurenz wrote:
 What strikes me is that since foreign key constraints are implemented
 as triggers in PostgreSQL, this solution would probably not have many
 performance benefits over a self-written trigger that implements the
 same functionality.  Since you need two triggers for your example,
 the performance might even be worse than a single self-written trigger.
 
 Note that you cannot really write correct RI triggers without playing
 very low level games, i.e. writing C and using special kinds of
 snapshots and such.

Very true. I'm unsure whether that's an argument in favour of extending
the built-in FK triggers, or of exposing the necessary functionality at the
SQL level, though ;-)

I once tried to do the latter, in a way, by removing the need for the
cross-checking logic (which is the only real low-level game that the
built-in FK triggers play) altogether. That, unfortunately, didn't pan
out - it would have required enlarging the on-disk tuple size to be
able to record the last transaction which locked a tuple even after the
transaction completes.

A simpler way would be to provide a special command which enables the
re-checking logic for ordinary queries. Something like

  CONSTRAINT UPDATE table SET … WHERE ...
  CONSTRAINT DELETE FROM table WHERE …

which would execute the command with a cross-check snapshot just like
ri_trigger.c's ri_PerformCheck() does.

best regards,
Florian Pflug

 





Re: [HACKERS] Draft release notes for 9.3.2

2013-12-02 Thread Sergey Burladyan
On Mon, Dec 2, 2013 at 4:56 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andres Freund and...@2ndquadrant.com writes:
  On 2013-12-01 18:56:19 -0500, Tom Lane wrote:
  I'd like to do any required editing on the notes at this stage,
  before I start extracting relevant subsets for the older branches.

  When do you plan to backpatch the documentation?

 Tomorrow afternoon (my time).

 regards, tom lane




Hello!

Is it possible to fix my surname in the changelog?
-Sergey Burladyn
+Sergey Burladyan

it is not a problem if it is impossible :-)

Thanks!

-- 
Sergey Burladyan


Re: [HACKERS] Proposed feature: Selective Foreign Keys

2013-12-02 Thread Andres Freund
On 2013-12-02 12:10:32 +, Florian Pflug wrote:
 On Dec2, 2013, at 10:06 , Andres Freund and...@2ndquadrant.com wrote:
  On 2013-12-02 08:57:01 +, Albe Laurenz wrote:
  What strikes me is that since foreign key constraints are implemented
  as triggers in PostgreSQL, this solution would probably not have many
  performance benefits over a self-written trigger that implements the
  same functionality.  Since you need two triggers for your example,
  the performance might even be worse than a single self-written trigger.
  
  Note that you cannot really write correct RI triggers without playing
  very low level games, i.e. writing C and using special kinds of
  snapshots and such.
 
 Very true. I'm unsure whether that's an argument in favour of extending
 the built-in FK triggers, or to expose the necessary functionality at the
 SQL level, though ;-)

Pretty much neither ;). I was just commenting on the fact that I don't
think Albe's argument has much merit in the current state of
postgresql. I haven't thought sufficiently about the issue to
have a clear plan for what I think is right.

 I once tried to do the latter, in a way, by removing the need for the
 cross-checking logic (which is the only real low-level game that the
 built-in FK triggers play) altogether. That, unfortunately, didn't pan
 out - it would have required enlarging the on-disk tuple size to be
 able to record to last transaction which locked a tuple even after the
 transaction completes.

That infrastructure kind of is there now, in the form of multixacts.
I haven't followed the idea back then, but maybe that could be reused.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Proposed feature: Selective Foreign Keys

2013-12-02 Thread Tom Dunstan
Hi Laurenz!

On 2 Dec 2013, at 19:27, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 What strikes me is that since foreign key constraints are implemented
 as triggers in PostgreSQL, this solution would probably not have many
 performance benefits over a self-written trigger that implements the
 same functionality.  Since you need two triggers for your example,
 the performance might even be worse than a single self-written trigger.

Well, the main cost on insert in the FK table should be looking for matching 
rows in the referenced tables, which the patch avoids for non-matching rows. So 
while you’ll get the overhead of N triggers firing, you only pay the expected 
query cost (which will even use a partial index if you’ve got one set up). Each 
of the referenced tables is only involved in one FK, so there’s no difference 
in cost there.
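For illustration, the kind of partial index alluded to here, matching the
comment_blog_fk constraint from the proposal (a sketch, not something the
patch creates for you):

-- Matches the FK's WHERE clause, so the parent-side checks (e.g. on
-- DELETE FROM blog) only have to scan the blog-referencing comments.
CREATE INDEX comment_blog_parent_idx ON comment (parent_id)
    WHERE parent_entity = 'blog'::regclass;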

 Now performance isn't everything, but that would mean that the benefit
 of your proposal is entirely on the usability side.

Well, I don’t think there’s much of a performance hit, and I don’t think any of 
the alternatives would perform much better in practice, but certainly 
performance wasn’t a motivating factor for this feature; it was a) correctness 
and b) avoiding the ugliness of the existing solutions. 

 I personally don't think that it is so difficult to write a trigger
 for that functionality yourself, but I guess that the argument for
 this feature rests on how coveted such a functionality would be
 (to justify the trade-off in code complexity).

The patch is pretty small so far - and more than half of it is regression 
tests. So there’s not much extra code complexity IMO. I wouldn’t want to touch 
the FK system with anything but the lightest touch.

Cheers

Tom



Re: [HACKERS] DATE type output does not follow datestyle parameter

2013-12-02 Thread MauMau

Hi, Bruce san,

From: Bruce Momjian br...@momjian.us

On Tue, Aug  6, 2013 at 12:09:53PM -0400, Tom Lane wrote:

Yes.  I don't see any reason to change it, either, as nobody has
complained that it's actually bad.  If you feel a compulsion to
change the docs, do that.


OK, seems 'Postgres' is a unique output format for 'date' too, even though
it doesn't look like the 'Postgres' timestamp output:

default:

SET datestyle = 'ISO, MDY'; SELECT current_timestamp, current_date;
SET
              now              |    date
-------------------------------+------------
 2013-08-06 16:18:48.218555-04 | 2013-08-06

SET datestyle = 'SQL, MDY'; SELECT current_timestamp, current_date;
SET
              now               |    date
--------------------------------+------------
 08/06/2013 16:18:43.054488 EDT | 08/06/2013

SET datestyle = 'German, MDY'; SELECT current_timestamp, current_date;
SET
              now               |    date
--------------------------------+------------
 06.08.2013 16:18:59.026553 EDT | 06.08.2013

MDY:

SET datestyle = 'Postgres, MDY'; SELECT current_timestamp, current_date;
SET
                 now                 |    date
-------------------------------------+------------
 Tue Aug 06 16:18:53.590548 2013 EDT | 08-06-2013

DMY:

SET datestyle = 'Postgres, DMY'; SELECT current_timestamp, current_date;
SET
                 now                 |    date
-------------------------------------+------------
 Tue 06 Aug 16:20:23.902549 2013 EDT | 06-08-2013

I don't think there is even a documentation change I can suggest.


I'm sorry I didn't respond for a long time.  I've come up with a suggestion.

The original reporter of this problem expected the output of the date type 
in 'Postgres,DMY' style to be "17 Dec 1997", when the output of the 
timestamp is "Wed 17 Dec 07:37:16 1997 PST".  He thought so by reading the 
following sentence:


http://www.postgresql.org/docs/current/static/datatype-datetime.html
The output of the date and time types is of course only the date or time 
part in accordance with the given examples.


However, the actual output of the date type in Postgres style is:

* 12-17-1997 if datestyle is 'Postgres,YMD' or 'Postgres,MDY'
* 17-12-1997 if datestyle is 'Postgres,DMY'

So, my suggestion is to just add the following sentence right after the 
above one.


"The Postgres style is an exception: the output of the date type is either 
MM-DD-YYYY (e.g. 12-17-1997) or DD-MM-YYYY (e.g. 17-12-1997), which is 
different from the date part of the output of the timestamp type."


Could you consider and add this to the manual?

Regards
MauMau







Re: [HACKERS] Backup throttling

2013-12-02 Thread Boszormenyi Zoltan

Hi,

I am reviewing your patch.

On 2013-10-10 15:32, Antonin Houska wrote:

On 10/09/2013 08:56 PM, Robert Haas wrote:

There seem to be several review comments made since you posted this
version.  I'll mark this "Waiting on Author" in the CommitFest
application, since it seems that the patch needs to be further
updated.

Since it was waiting for a reviewer, I was not sure whether I should wait
for his findings and fix everything in a single pass.
Nevertheless, the next version is attached here.

It fixes findings reported in
http://www.postgresql.org/message-id/20130903165652.gc5...@eldon.alvh.no-ip.org

As for units
http://www.postgresql.org/message-id/20130903224127.gd7...@awork2.anarazel.de
I'm not convinced about MB at the moment. Unfortunately I couldn't
find any other command-line PG utility requiring an amount of data as an
option. Thus I use a single-letter suffix, just as wget does for the same
purpose.

As for this
http://www.postgresql.org/message-id/20130903125155.ga18...@awork2.anarazel.de
there eventually seems to be a consensus (I notice now the discussion
was off-list):


On 2013-09-03 23:21:57 +0200, Antonin Houska wrote:

On 09/03/2013 02:51 PM, Andres Freund wrote:


It's probably better to use latches for the waiting, those have properly
defined interruption semantics. Whether pg_usleep will be interrupted is
platform dependant...

I noticed a comment about interruptions around the definition of the
pg_usleep() function, but concluded that the sleeps are rather frequent
in this application (typically on the order of tens to hundreds per
second, although the maximum of 256 might need to be decreased).
Therefore occasional interruptions shouldn't distort the actual rate
much. I'll think about it again. Thanks,

The issue is rather that you might not get woken up when you want to
be. Signal handlers in postgres tend to do a
SetLatch(&MyProc->procLatch); which then will interrupt sleeps done via
WaitLatch(). It's probably not that important with the duration of the
sleeps you have.

Greetings,

Andres Freund

// Antonin Houska (Tony)


* Is the patch in a patch format which has context?

Yes

* Does it apply cleanly to the current git master?

It applies with some offsets. Version 3a that applies cleanly is attached.

* Does it include reasonable tests, necessary doc patches, etc?

Docs: yes. Tests: N/A

* Does the patch actually implement what it advertises?

Yes.

* Do we want that?

Yes.

* Do we already have it?

No.

* Does it follow SQL spec, or the community-agreed behavior?

No such SQL spec. The latest patch fixed all previously raised comments.

* Does it include pg_dump support (if applicable)?

N/A

* Are there dangers?

Yes, the danger of slowing down taking a base backup.
But this is the point.

* Have all the bases been covered?

Yes.

* Does the feature work as advertised?

Yes.

* Are there corner cases the author has failed to consider?

No.

* Are there any assertion failures or crashes?

No.

* Does the patch slow down simple tests?

No.

* If it claims to improve performance, does it?

N/A

* Does it slow down other things?

No.

* Does it follow the project coding guidelines?

Yes. One nitpick: the else branch below might need braces
because there is also a comment in that branch:

+   /* The 'real data' starts now (header was ignored). */
+   throttled_last = GetCurrentIntegerTimestamp();
+   }
+   else
+   /* Disable throttling. */
+   throttling_counter = -1;
+

* Are there portability issues?

No.

* Will it work on Windows/BSD etc?

It should, there are no dangerous calls besides the above mentioned
pg_usleep(). But waking up from pg_usleep() earlier makes rate limiting
fluctuate slightly, not fail.

* Are the comments sufficient and accurate?

Yes.

* Does it do what it says, correctly?

Yes.

Although it should be mentioned in the docs that rate limiting
applies to walsenders individually, not globally. I tried this
on a freshly created database:

$ time pg_basebackup -D data2 -r 1M -X stream -h 127.0.0.1
real0m26.508s
user0m0.054s
sys0m0.360s

The source database had 3 WAL files in pg_xlog, one of them was
also streamed. The final size of data2 was 43MB or 26MB without pg_xlog,
i.e. without the -X stream option. The backup used 2 walsenders
in parallel (one for WAL) which is a known feature.

* Does it produce compiler warnings?

No.

* Can you make it crash?

No.

Consider the changes to the code in the context of the project as a whole:

* Is everything done in a way that fits together coherently with other 
features/modules?

Yes.

* Are there interdependencies that can cause problems?

No.

Another note. This chunk should be submitted separately as a comment bugfix:

diff --git a/src/backend/utils/adt/timestamp.c 
b/src/backend/utils/adt/timestamp.c
index c3c71b7..5736fd8 100644
--- a/src/backend/utils/adt/timestamp.c
+++ 

Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
  Having a versioning notion (and whatever other meta data we, or an
  extension author, feels is useful) for what are otherwise simple containers
  (aka the schematic we already have..) makes sense and it would be great to
  provide support around that, but not this duplication of
  object definitions.
 
 I don't like duplication either, we've just been failing to find any
 alternative with pg_restore support for the last 3 years.

 *That doesn't make this approach the right one*.  If anything, I'm
 afraid we've ended up building ourselves a rube goldberg machine because
 of this constant struggle to fit a square peg into a round hole.

This duplication you're talking about only applies to CREATE EXTENSION.

I don't know of any ways to implement ALTER EXTENSION … UPDATE …
behaviour without a separate set of scripts to apply in a certain order
depending on the current and target versions of the extension.
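For reference, this is the file-based mechanism being described: an update
is driven by version-transition scripts shipped with the extension (sketch
with a hypothetical extension named foo):

-- Files shipped on the server's filesystem today:
--   foo.control        (default_version = '1.1')
--   foo--1.0.sql       (creates the version 1.0 objects)
--   foo--1.0--1.1.sql  (transition script from 1.0 to 1.1)
-- The DBA then runs:
ALTER EXTENSION foo UPDATE TO '1.1';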

If you know how to enable a DBA to update a set of objects in a database
only with information already found in the database, and in a way that
this information is actually *not* an SQL script, I'm all ears.

 That's basically what we already do with schemas today and hence is
 pretty darn close to what I'm proposing.  Perhaps it'd be a way to
 simply version schemas themselves- heck, with that, we could even
 provide that oft-asked-for schema delta tool in-core by being able to
 deduce the differences between schema at version X and schema at
 version Y.

Given that at any moment you have a single version of the schema
installed, I don't know how you're supposed to be able to do that?

Maybe you mean by tracking the changes at update time? Well that at
least would be a good incentive to have Command String access in event
triggers, I guess.

 That would work beautifully, and of course you would have to do that
 again manually at pg_restore time after CREATE DATABASE and before
 pg_restore, or you would need to change the fact that extension objects
 are not part of your pg_dump scripts, or you would have to name your new
 thing something else than an extension.

 We would need a way to dump and restore this, of course.

Which is available in the current patch, of course.

 Having a management system for sets of objects is a *great* idea- and
 one which we already have through schemas.  What we don't have is any
 kind of versioning system built-in or other metadata about it, nor do we
 have good tooling which leverages such a versioning or similar system.

Exactly.

How can we implement ALTER OBJECT … UPDATE TO VERSION without having
access to some SQL scripts?

The current patch offers a way to manage those scripts and apply them,
with the idea that the people managing the scripts (extension authors)
and the people applying them (DBAs) are not going to be the same people,
and that it's then possible to have to apply more than a single script
for a single UPDATE command.

 I really just don't see this as being either particularly useful or
 feasible within a reasonable amount of effort.  Shared libraries are
 really the purview of the OS packaging system.  If you want to build
 some tool which is external to PG but helps facilitate the building and
 installing of shared libraries, but doesn't use the OS packaging system
 (and, instead, attempts to duplicate it) then go for it, but don't
 expect to ship or install that through the PG backend.

I'll give you that implementing Event Triggers just to be able to build
what you're talking about on top of it and out of core might not be
called “a reasonable amount of effort.”

 The problem found here is that if a non-privileged user installs an
 extension template named “pgcrypto” and the superuser then installs what he
 believes is the extension “pgcrypto”, the malicious unprivileged user
 now is running his own code (the extension install script) as a superuser.

 For my part, the problem here is this notion of extension templates in
 the PG catalog and this is just one symptom of how that's really not a
 good approach.

The only reason for that being the case is that you suppose that root on
the file system is more trustworthy as an entity than postgres on the
file system or any superuser in the PostgreSQL service.

As soon as you question that, then you might come to realise the only
difference between file-system templates and catalog templates is our
ability to deal with the problem, rather than the problem itself.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Heikki Linnakangas (hlinnakan...@vmware.com) wrote:
 Right. I think Jeff was thinking of a catalog representation for
 extensions that haven't been installed yet, but are available in the
 system and could be installed with CREATE EXTENSION foo.

I really don't particularly see value in this unless it hooks into PGXN
or similar somehow (a la how an apt repository works).  I just don't see
the point if users have to install templates to then get a list of what
extensions they have available to install.  The whole 'extension
template' piece of this just ends up being overhead and gets in the way.

 I wouldn't
 mind having a catalog like that. Even without any of this extension
 template stuff, it would be handy to have a view that lists all the
 extensions available in the filesystem.

As mentioned, that's available for the filesystem-based extensions.

 There should be no difference between file-based extensions and
 catalog-based extensions. It's just two different ways to install
 the same extension. The extension author doesn't need to care about
 that, it's the DBA that decides which method to use to install it.
 
 I'm going to object loudly to any proposal that doesn't meet that criteria.

Right, which is why I think this is going to *have* to exist outside of
the backend as an independent tool which can simply install an extension
through normal libpq/PG object creation method- very similar to how
extension creation already happens, except that we're being fed from a
PG connection instead of reading in an SQL file from the filesystem.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
  There should be no difference between file-based extensions and
  catalog-based extensions. It's just two different ways to install the same
  extension. The extension author doesn't need to care about that, it's the
  DBA that decides which method to use to install it.
 
 Agreed.
 
  I'm going to object loudly to any proposal that doesn't meet that criteria.
 
 Please be kind enough to point me to where my current patch is drifting away
 from that criteria. What you're proposing here is what I think I have
 been implementing.

Perhaps you're seeing something down the road that I'm not, but I don't
see how what you're proposing with extension templates actually moves us
closer to this goal.

What is the next step to allow an extension pulled down from pgxn to be
installed, unchanged, into a given database?

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 What is the next step to allow an extension pulled down from pgxn to be
 installed, unchanged, into a given database?

An extension packaging system.

“Unchanged” is not a goal, and not possible even today.

PGXN is a *source based* packaging system. You can't just install what's
in PGXN on the server's file system then CREATE EXTENSION, you have this
extra step called the “build”.

Whether you're targetting a file system template or a catalog template,
PGXN is not a complete solution, you still need to build the extension.

As I already mentioned in this thread, that's even true for SQL-only
extensions today; have a look at this example:

  http://api.pgxn.org/src/mimeo/mimeo-1.0.1/
  http://api.pgxn.org/src/mimeo/mimeo-1.0.1/Makefile

So even as of today, given file based extension templates and PGXN,
there's something missing. You can find different client tools to help
you there, such as pgxn_client and pex:

  http://pgxnclient.projects.pgfoundry.org/
  https://github.com/petere/pex

What I want to build is an “extension distribution” software that knows
how to prepare anything from PGXN (and other places) so that it's fully
ready for being used in the database. Then the main client would run as
a CREATE EXTENSION ddl_command_start Event Trigger and would fetch the
prepared extension for you and make it available, then leaving the main
command operate as intended.

Which is what I think the pex extension is doing, and that's not
coincidental, but it runs the build step on the PostgreSQL server itself
and needs to have a non-trivial set of file-system privileges to be
doing so, and even needs to get root privileges with sudo for some of
its operations.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-12-02 Thread Amit Kapila
On Mon, Dec 2, 2013 at 7:40 PM, Haribabu kommi
haribabu.ko...@huawei.com wrote:
 On 29 November 2013 03:05 Robert Haas wrote:
 On Wed, Nov 27, 2013 at 9:31 AM, Amit Kapila amit.kapil...@gmail.com
 wrote:

 I tried modifying the existing patch to support dynamic rollup as follows:
 it rolls up for every 32 bytes of mismatch between the old and new tuple,
 and resets back whenever it finds a match.

 1. pglz-with-micro-optimization-compress-using-newdata-5:

 Adds all old tuple data to the history and then checks for a match from the
 new tuple. For every 32 bytes of mismatch, it checks for a match of 2 bytes
 at a time, repeating until it finds a match or reaches the end of the data.

 2. pglz-with-micro-optimization-compress-using-newdata_snappy_hash-1:

 Adds only the first byte of old tuple data to the history and then checks for
 a match from the new tuple. If any match is found, then the next unmatched
 byte from the old tuple is added to the history and the process repeats.

 If no match is found, then it adds the next byte of the old tuple to the
 history, followed by the unmatched byte from the new tuple data.

 In this case the performance is good, but if there are any forward references
 in the new data relative to the old data then it will not compress the data.

The performance data still has the same problem: on fast disks
(tmpfs data) it is low.
I am already doing a chunk-wise implementation to see if it can
improve the situation, so please wait,
and then we can decide what is the best way to proceed.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  *That doesn't make this approach the right one*.  If anything, I'm
  afraid we've ended up building ourselves a rube goldberg machine because
  of this constant struggle to fit a square peg into a round hole.
 
 This duplication you're talking about only applies to CREATE EXTENSION.
 
 I don't know of any ways to implement ALTER EXTENSION … UPDATE …
 behaviour without a separate set of scripts to apply in a certain order
 depending on the current and target versions of the extension.

We've already got it in the form of how filesystem extensions work
today.

 If you know how to enable a DBA to update a set of objects in a database
 only with information already found in the database, and in a way that
 this information is actually *not* an SQL script, I'm all ears.

Clearly we need the information from the extension package (the scripts
which are on the PG server's filesystem today, but need not be in the
future) but that doesn't mean we need to keep those text blobs in the
catalog.

  That's basically what we already do with schemas today and hence is
  pretty darn close to what I'm proposing.  Perhaps it'd be a way to
  simply version schemas themselves- heck, with that, we could even
  provide that oft-asked-for schema delta tool in-core by being able to
  deduce the differences between schema at version X and schema at
  version Y.
 
 Given that at any moment you have a single version of the schema
 installed, I don't know how you're supposed to be able to do that?

*I am not trying to rebuild the entire extension package from the PG
catalog*.  I do not see the need to do so either.  Perhaps that's
short-sighted of me, but I don't think so; to go back to my dpkg
example, we don't store the source package in dpkg's database nor do
people generally feel the need to rebuild .deb's from the files which
are out on the filesystem (a non-trivial task though I suppose it might
be possible to do- but not for *every version* of the package..).

 Maybe you mean by tracking the changes at update time? Well that at
 least would be a good incentive to have Command String access in event
 triggers, I guess.

I don't see the need to track the changes at all.  We don't actually
track them in the database anywhere today...  We happen to have scripts
available on the filesystem which allow us to move between versions, but
they're entirely outside the catalog and that's where they belong.

  Having a management system for sets of objects is a *great* idea- and
  one which we already have through schemas.  What we don't have is any
  kind of versioning system built-in or other metadata about it, nor do we
  have good tooling which leverages such a versioning or similar system.
 
 Exactly.
 
 How can we implement ALTER OBJECT … UPDATE TO VERSION without having
 access to some SQL scripts?
 
 The current patch offers a way to manage those scripts and apply them,
 with the idea that the people managing the scripts (extension authors)
 and the people applying them (DBAs) are not going to be the same people,
 and that it's then possible to have to apply more than a single script
 for a single UPDATE command.

Extension authors are not going to be issuing updates to everyone's
catalogs directly to update their templates.  That's still going to be
the job of the DBA, or of some tool the DBA runs.  I'm arguing that such a tool
could actually do a lot more and work outside of the PG backend but
communicate through libpq.  As I see it, you're trying to build that
tool *into* the backend and while 'extension templates' might end up
there, I don't think you're going to get your wish when it comes to
having a PG backend reach out over the internet at the request of a
normal user, ever.

 As soon as you question that, then you might come to realise the only
 difference in between file-system templates and catalog templates is our
 ability to deal with the problem, rather than the problem itself.

I really think there's a good deal more to my concerns than that. :)

Thanks,

Stephen




Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 Maybe that it shouldn't be part of the main regression schedule...

It *can't* be part of the main regression tests; those are supposed to
be runnable against an already-installed server, and fooling with that
server's configuration is off-limits too.  But I agree that some
other facility to simplify running tests like this would be handy.

At the same time, I'm pretty skeptical that any simple regression-test
type facility would have caught the bugs we've fixed lately ...

regards, tom lane




[HACKERS] Add full object name to the tag field

2013-12-02 Thread Asit Mahato
Hi all,

I am a newbie. I am unable to understand the TODO statement given below.

Add full object name to the tag field. E.g. for operators we need
'=(integer, integer)', instead of just '='.

please help me out with an example.

Thanks and Regards,
Asit Mahato


Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Andres Freund
On 2013-12-02 09:41:39 -0500, Tom Lane wrote:
 At the same time, I'm pretty skeptical that any simple regression-test
 type facility would have caught the bugs we've fixed lately ...

Agreed, but it would make reorganizing stuff to be more robust more
realistic. At the moment, for everything you change, you have to hand-test
everything possibly affected, which takes ages.

I think we also need support for testing xid/multixact wraparound. It
currently isn't realistically testable because of the timeframes
involved.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 I think we also need support for testing xid/multixact wraparound. It
 currently isn't realistically testable because of the timeframes
 involved.

When I've wanted to do that in the past, I've used pg_resetxlog to
adjust a cluster's counters.  It still requires some manual hacking
though because pg_resetxlog isn't bright enough to create the new
pg_clog files needed when you move the xid counter a long way.
We could fix that, or we could make the backend more forgiving of
not finding the initial clog segment present at startup ...

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  What is the next step to allow an extension pulled down from pgxn to be
  installed, unchanged, into a given database?
 
 An extension packaging system.
 
 “Unchanged” is not a goal, and not possible even today.

I'm not convinced of that, actually, but you do raise a good point.

 PGXN is a *source based* packaging system. You can't just install what's
 in PGXN on the server's file system then CREATE EXTENSION, you have this
 extra step called the “build”.

Fine- so we need a step that goes from 'source' to 'built'.  I don't see
that step being done in or by a PG backend process.  Adding a new option
which can take a pgxn source and build a script from it which can be run
against PG via libpq is what I'd be going for- but that script *just
installs (or perhaps upgrades) the extension.*  There's no need for that
script, or various upgrade/downgrade/whatever scripts, to be sucked
wholesale into the PG catalog.

 What I want to build is an “extension distribution” software that knows
 how to prepare anything from PGXN (and other places) so that it's fully
 ready for being used in the database. Then the main client would run as
 a CREATE EXTENSION ddl_command_start Event Trigger and would fetch the
 prepared extension for you and make it available, then leaving the main
 command operate as intended.

I really don't think that's a good approach.

 Which is what I think the pex extension is doing, and that's not
 coincidental, but it runs the build step on the PostgreSQL server itself
 and needs to have a non-trivial set of file-system privileges to be
 doing so, and even needs to get root privileges with sudo for some of
 its operations.

pex is an interesting beginning to this, but we'd need *some* backend
support for being able to install the extension via libpq (or pex would
need to be modified to not actually use our extension framework at
all for 'trusted' extensions...).

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Heikki Linnakangas

On 12/02/2013 04:14 PM, Dimitri Fontaine wrote:

Stephen Frost sfr...@snowman.net writes:

What is the next step to allow an extension pulled down from pgxn to be
installed, unchanged, into a given database?


An extension packaging system.

“Unchanged” is not a goal, and not possible even today.

PGXN is a *source based* packaging system. You can't just install what's
in PGXN on the server's file system then CREATE EXTENSION, you have this
extra step called the “build”.

Whether you're targetting a file system template or a catalog template,
PGXN is not a complete solution, you still need to build the extension.


So? Just "make; make install" and you're done. Or "apt-get install foo".


What I want to build is an “extension distribution” software that knows
how to prepare anything from PGXN (and other places) so that it's fully
ready for being used in the database.


You mean, something to replace "make install" if it's not installed on 
the server? Fair enough. You could probably write a little Perl script 
to parse simple Makefiles that only copy a few static files in place. Or 
add a flag to the control file indicating that the extension follows a 
standard layout, and doesn't need a make step.


I fear we're wandering off the point again. So let me repeat: It must be 
possible to install the same extension the way you do today, and using 
the new mechanism.


- Heikki




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 Clearly we need the information from the extension package (the scripts
 which are on the PG server's filesystem today, but need not be in the
 future) but that doesn't mean we need to keep those text blobs in the
 catalog.

So, I guess it would have been good to hear about that about a year ago:

  http://www.postgresql.org/message-id/13481.1354743...@sss.pgh.pa.us
  http://www.postgresql.org/message-id/6466.1354817...@sss.pgh.pa.us

We could have CREATE TEMPLATE FOR EXTENSION store the scripts into some
files in PGDATA instead of the catalogs, but really I don't see the
point.
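For readers who haven't followed the patch, the command under discussion
looks roughly like this (a hedged reconstruction; the exact clause layout
is the patch's, was never committed, and is partly an assumption here):

-- Hypothetical sketch: the script body would be stored by the server
-- (in the catalogs, or in files under PGDATA as suggested above) and
-- executed later by CREATE EXTENSION foo;
CREATE TEMPLATE FOR EXTENSION foo VERSION '1.0' AS
$foo$
  CREATE FUNCTION foo_hello() RETURNS text
  LANGUAGE sql AS $$ SELECT 'hello, world' $$;
$foo$;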

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Proposed feature: Selective Foreign Keys

2013-12-02 Thread Andrew Dunstan


On 12/02/2013 05:06 AM, Andres Freund wrote:

On 2013-12-02 08:57:01 +, Albe Laurenz wrote:

What strikes me is that since foreign key constraints are implemented
as triggers in PostgreSQL, this solution would probably not have many
performance benefits over a self-written trigger that implements the
same functionality.  Since you need two triggers for your example,
the performance might even be worse than a single self-written trigger.

Note that you cannot really write correct RI triggers without playing
very low level games, i.e. writing C and using special kinds of
snapshots and such.



Yeah, I really don't think that's a feasible way to do this.

The only way I have thought of as an alternative to this proposal is to 
use a partitioned table with different FK constraints for each child. 
That's certainly doable, but not without a good deal of work, and even then 
you'd be giving up certain things, such as guaranteeing the uniqueness 
of the object key, at least without a lot more work.
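A rough sketch of that alternative, using the comment example from this
thread and 9.3-era inheritance (illustrative only):

CREATE TABLE comment (
    id      bigserial PRIMARY KEY,
    content text NOT NULL
);

-- One child table per parent entity, each with an ordinary FK. Note that
-- the parent's PRIMARY KEY does not constrain the children: that is the
-- lost uniqueness guarantee mentioned above.
CREATE TABLE comment_blog (
    parent_id int8 NOT NULL REFERENCES blog(id)
) INHERITS (comment);

CREATE TABLE comment_event (
    parent_id int8 NOT NULL REFERENCES event(id)
) INHERITS (comment);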


You can think of it this way: we currently enforce FK constraints except 
when the value being constrained is NULL (or part of it is NULL in the 
MATCH SIMPLE case). This is really a user-defined extension of the 
exception condition. I have at least one case where I could have used 
this feature and saved a significant amount of work. We wanted to apply 
FK constraints to a very large table, but grandfather in certain cases 
that didn't meet the constraint. That could have been done very simply 
using this feature.
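With the proposed syntax, that grandfathering case might have looked
something like this (hypothetical table and column names):

-- Enforce the FK only for rows that aren't grandfathered in.
ALTER TABLE big_table
    ADD CONSTRAINT big_table_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent(id)
    WHERE (NOT grandfathered);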


cheers

andrew





Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 Fine- so we need a step that goes from 'source' to 'built'.  I don't see
 that step being done in or by a PG backend process.  Adding a new option
 which can take a pgxn source and build a script from it which can be run
 against PG via libpq is what I'd be going for- but that script *just
 installs (or perhaps upgrades) the extension.*  There's no need for that
 script, or various upgrade/downgrade/whatever scripts, to be sucked
 wholesale into the PG catalog.

As you said previously, we can't ask extension authors to control what
version of their extension is installed on which database, so we need a
way to cooperate with the backend in order to know how to operate the
update.

We can't just pull data out of the backend to do that, not until we've
been pushing the list of available versions and update scripts that we
have to be able to run the update.

That's where I thought about pushing the whole thing down to the catalogs
and having the backend take control from there.

 What I want to build is an “extension distribution” software that knows
 how to prepare anything from PGXN (and other places) so that it's fully
 ready for being used in the database. Then the main client would run as
 a CREATE EXTENSION ddl_command_start Event Trigger and would fetch the
 prepared extension for you and make it available, then letting the main
 command operate as intended.

 I really don't think that's a good approach.

What's your alternative? Goals are:

  - using the update abilities of the extension mechanism
  - no access to the server's file system needed
  - pg_restore does the right thing

I went for the whole set of extension abilities in my patch; you're
pushing hard for me to reduce that goal, so I have only included the ability
to manage version upgrades here.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Heikki Linnakangas hlinnakan...@vmware.com writes:
 I fear we're wandering off the point again. So let me repeat: It must be
 possible to install the same extension the way you do today, and using the
 new mechanism.

The way you do today is running make install or apt-get install or
something else to write files in the right place on the file system,
usually with root privileges.

The new mechanism tries to avoid using the file system *completely*.

Sorry. I don't understand what you mean other than “I don't want this
patch because I don't understand what it is about”.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Then as soon as we are able to CREATE EXTENSION mystuff; without ever
 pre-installing files on the file system as root, then we would like to
 be able to do just that even with binary modules.

 I really just don't see this as being either particularly useful or
 feasible within a reasonable amount of effort.  Shared libraries are
 really the purview of the OS packaging system.

Yes, exactly.  What's more, you're going to face huge push-back from
vendors who are concerned about security (which is most of them).
If there were such a feature, it would end up disabled, one way or
another, in a large fraction of installations.  That would make it
impractical to use anyway for most extension authors.  I don't think
it's good project policy to fragment the user base that way.

I'm on board with the notion of an all-in-the-database extension
mechanism for extensions that consist solely of SQL objects.  But
not for ones that need a .so somewhere.

regards, tom lane




Re: [HACKERS] Draft release notes for 9.3.2

2013-12-02 Thread Tom Lane
Sergey Burladyan eshkin...@gmail.com writes:
 Is it possible to fix my surname in changelog?
 -Sergey Burladyn
 +Sergey Burladyan

Oh, sorry about that!  I can't do anything about the typo in the commit
log, but certainly we can get it right in the release notes.

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Stephen Frost sfr...@snowman.net writes:
 * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Then as soon as we are able to CREATE EXTENSION mystuff; without ever
 pre-installing files on the file system as root, then we would like to
 be able to do just that even with binary modules.

 I really just don't see this as being either particularly useful or
 feasible within a reasonable amount of effort.  Shared libraries are
 really the purview of the OS packaging system.

 Yes, exactly.  What's more, you're going to face huge push-back from
 vendors who are concerned about security (which is most of them).

Last time I talked with vendors, they were working in the Open Shift
team at Red Hat, and they actually asked me to offer them the ability
you're refusing, to let them enable a better security model.

The way they use cgroups and SELinux means that they want to be able to
load shared binaries from system user places.

 If there were such a feature, it would end up disabled, one way or
 another, in a large fraction of installations.  That would make it
 impractical to use anyway for most extension authors.  I don't think
 it's good project policy to fragment the user base that way.

That point about fragmentation is a concern I share.

 I'm on board with the notion of an all-in-the-database extension
 mechanism for extensions that consist solely of SQL objects.  But
 not for ones that need a .so somewhere.

Thanks for restating your position.

The current patch offers a feature that only works with SQL objects;
it's currently completely useless as soon as there's a .so involved.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  Clearly we need the information from the extension package (the scripts
  which are on the PG server's filesystem today, but need not be in the
  future) but that doesn't mean we need to keep those text blobs in the
  catalog.
 
 So, I guess it would have been good to hear about that about a year ago:
 
   http://www.postgresql.org/message-id/13481.1354743...@sss.pgh.pa.us
   http://www.postgresql.org/message-id/6466.1354817...@sss.pgh.pa.us
 
 We could have CREATE TEMPLATE FOR EXTENSION store the scripts into some
 files in PGDATA instead of the catalogs, but really I don't see the
 point.

Yeah, I don't particularly like that idea either, but especially if it's
going to be per-database again.  I can kinda, sorta see the point if
this was done cluster-wide but you don't like that idea and I'm not a
big fan of pushing these files out onto the filesystem anyway.

What I don't entirely follow is the argument against having
non-file-backed extensions be dump'd through pg_dump/restore.  Even in
that thread, Tom appears to agree that they'd have to be dumped out in
some fashion, even if they're stored as files under PGDATA, because
otherwise you're not going to be able to restore the DB..

On the other hand, I can appreciate the concern that we don't really
want a dump/restore to include the extension definition when it's
already on the filesystem.  That said, it amazes me that we don't
include the version # of the extension in pg_dump's 'CREATE EXTENSION'
command..  How is that not a problem?

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  Fine- so we need a step that goes from 'source' to 'built'.  I don't see
  that step being done in or by a PG backend process.  Adding a new option
  which can take a pgxn source and build a script from it which can be run
  against PG via libpq is what I'd be going for- but that script *just
  installs (or perhaps upgrades) the extension.*  There's no need for that
  script, or various upgrade/downgrade/whatever scripts, to be sucked
  wholesale into the PG catalog.
 
 As you said previously, we can't ask extension authors to control what
 version of their extension is installed on which database, so we need a
 way to cooperate with the backend in order to know how to operate the
 update.

Sure, that sounds reasonable..

 We can't just pull data out of the backend to do that, not until we've
 been pushing the list of available versions and update scripts that we
 have to be able to run the update.

I'm not following this, nor why we need this master list of every
extension which exists in the world to be in every PG catalog in every
database out there.

 That's where I thought about pushing the whole thing down to the catalogs
 and having the backend take control from there.

I can appreciate the desire to do that but this particular piece really
feels like it could be done better external to the backend.  To go back
to my OS example, I feel Debian is better off with apt-get/aptitude
being independent from dpkg itself.

 What's your alternative? Goals are:
 
   - using the update abilities of the extension mechanism
   - no access to the server's file system needed
   - pg_restore does the right thing
 
 I went for the whole set of extension abilities in my patch; you're
 pushing hard for me to reduce that goal, so I have only included the ability
 to manage version upgrades here.

I'd like to see these goals met, I just don't see it being all done in C
in the PG backend.  I've tried to outline my thoughts about how we
should keep the actual extension creation scripts, upgrade scripts, etc,
out of the backend catalogs (and not on the filesystem either..) and let
those be managed externally, but that does then require that we
actually dump the extension's objects instead of just
'CREATE EXTENSION blah;'.  I understand there have been objections
raised to that, but I wonder if that isn't mainly because we're calling
these new things extensions which have this built-in notion that
they're coming from an OS packaging system which installs files
somewhere..  We certainly have none of these qualms about dumping and
restoring all the objects in a given schema.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 On the other hand, I can appreciate the concern that we don't really
 want a dump/restore to include the extension definition when it's
 already on the filesystem.  That said, it amazes me that we don't
 include the version # of the extension in pg_dump's 'CREATE EXTENSION'
 command..  How is that not a problem?

Including the version number would be a problem.

When you install PostgreSQL 9.1, you only have hstore 1.0.
When you install PostgreSQL 9.2, you only have hstore 1.1.
When you install PostgreSQL 9.3, you only have hstore 1.2.

  
http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/hstore/hstore.control;hb=refs/heads/REL9_1_STABLE
  
http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/hstore/hstore.control;hb=refs/heads/REL9_2_STABLE
  
http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/hstore/hstore.control;hb=refs/heads/REL9_3_STABLE

We should maybe add the extension's version number in our documentation
pages, such as the following:

  http://www.postgresql.org/docs/9.3/interactive/hstore.html

So when you pg_dump | pg_restore from 9.1 into 9.3, if pg_dump were to
be nitpicky about the version of hstore with the command

  CREATE EXTENSION hstore VERSION '1.0';

What would happen is that pg_restore would fail.

That's just the way we maintain contribs.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Tom Lane t...@sss.pgh.pa.us writes:
  Yes, exactly.  What's more, you're going to face huge push-back from
  vendors who are concerned about security (which is most of them).
 
 Last time I talked with vendors, they were working in the Open Shift
 team at Red Hat, and they actually asked me to offer them the ability
 you're refusing, to let them enable a better security model.
 
 The way they use cgroups and SELinux means that they want to be able to
 load shared binaries from system user places.

As I've pointed out before, I'd really like to hear exactly how these
individuals are using SELinux and why they feel this is an acceptable
approach.  The only use-case that this model fits is where you don't
have *any* access control in the database itself and everyone might as
well be a superuser.  Then, sure, SELinux can prevent your personal PG
environment from destroying the others on the system in much the same
way that a chroot can help there, but most folks who are looking at MAC
would view *any* database as an independent object system which needs to
*hook into* an SELinux or similar.

In other words, I really don't think we should be encouraging this
approach and certainly not without more understanding of what they're
doing here.  Perhaps they have a use-case for it, but it might be better
done through 'adminpack' or something similar than what we support in
core.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 So when you pg_dump | pg_restore from 9.1 into 9.3, if pg_dump were to
 be nitpicky about the version of hstore with the command
 
   CREATE EXTENSION hstore VERSION '1.0';
 
 What would happen is that pg_restore would fail.
 
 That's just the way we maintain contribs.

I'd much rather get an error that says that version of the extension is
unavailable than a runtime error when my plpgsql code tries to use a
function whose definition changed between 1.0 and 1.1..
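
Concretely, the early failure preferred here would surface at restore time
rather than at first call (a sketch against a 9.3 server, which ships only
the hstore 1.2 install script; error text approximate):

  CREATE EXTENSION hstore VERSION '1.0';
  ERROR:  extension "hstore" has no installation script nor update path
          for version "1.0"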

Perhaps we're not ready to go there because of how contrib is built and
shipped, and I can understand that, but that doesn't make it a good
solution.  I'm not sure that such an issue should preclude us from
including in-catalog-only extensions from being dump'd out as a set of
objects (ala a schema) and then restored that way (preserving the
version of the extension it was installed at..).

I don't like the idea of having a pg_dump/restore mechanism that
intentionally tries to go out and install the latest version of whatever
extension was installed in the old DB by downloading it from PGXN,
building it, and then installing it...  Is that what people are
expecting here?

Thanks,

Stephen




Re: [HACKERS] [PATCH] avoid buffer underflow in errfinish()

2013-12-02 Thread Robert Haas
On Sat, Nov 30, 2013 at 2:00 PM, Bruce Momjian br...@momjian.us wrote:
 On Wed, Mar 27, 2013 at 08:45:51AM -0400, Robert Haas wrote:
 On Sat, Mar 23, 2013 at 6:38 PM, Xi Wang xi.w...@gmail.com wrote:
  CHECK_STACK_DEPTH checks if errordata_stack_depth is negative.
  Move the dereference of errordata[errordata_stack_depth] after
  the check to avoid out-of-bounds read.

 This seems sensible and I'm inclined to commit it.  It's unlikely to
 matter very much in practice, since the only point of checking the
 stack depth in the first place is to catch a seemingly-unlikely coding
 error; and it's unlikely that referencing beyond the stack bounds
 would do anything too horrible, either.  But we may as well do it
 right.

 Was this ever dealt with?

No, it fell through the cracks.  I have just committed it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] unused code in float8_to_char , formatting.c ?

2013-12-02 Thread Robert Haas
On Sat, Nov 30, 2013 at 9:01 PM, Bruce Momjian br...@momjian.us wrote:
 On Sun, Apr  7, 2013 at 12:14:29AM -0400, Robert Haas wrote:
 On Thu, Apr 4, 2013 at 6:47 PM, Greg Jaskiewicz gryz...@me.com wrote:
  Looking around the code today, one of my helpful tools detected this dead
  code.
  As far as I can see, it is actually an unused call to strlen() in
  formatting.c, float8_to_char().

 I poked at this a little and suggest the following somewhat more
 extensive cleanup.

 It seems to me that there are a bunch of these functions where len is
 unconditionally initialized in NUM_TOCHAR_prepare and then used there.
  Similarly in NUM_TOCHAR_cleanup.  And then there's a chunk of each
 individual function that does it a third time.  Rather than use the
 same variable in all three places, I've moved the variable
 declarations to the innermost possible scope.  Doing that revealed a
 bunch of other, similar places where we can get rid of strlen() calls.

 Does this version seem like a good idea?

 Robert, were you going to apply this patch from April?

Done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 I don't like the idea of having a pg_dump/restore mechanism that
 intentionally tries to go out and install the latest version of whatever
 extension was installed in the old DB by downloading it from PGXN,
 building it, and then installing it...  Is that what people are
 expecting here?

The whole idea of having Extension Templates in catalogs is exactly to
prevent what you're describing here from happening.

Whatever templates you downloaded to get to the version you now have
in your database for extension “foo” are going to be used again by
pg_restore at CREATE EXTENSION time. Having the extension depend on its
in-catalog templates enforces that model of operation.

You can copy/paste some extension examples from the regression tests and
run pg_dump -Fc | pg_restore -l to see the details.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Logging WAL when updating hintbit

2013-12-02 Thread Robert Haas
On Mon, Dec 2, 2013 at 12:54 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 On Thu, Nov 28, 2013 at 9:42 PM, Sawada Masahiko sawada.m...@gmail.com 
 wrote:
 I attached a new version of the patch, which fixes typos and adds
 documentation.
 The documentation part of the patch was implemented by Samrat Revagade.
 Thanks for the new version. The documentation still has a typo:
 - is ,<literal>off</> => is <literal>off</>

 I have been pondering this feature over the weekend and I have to
 admit that the approach of using a GUC that can be modified after
 server initialization is not suitable. What we want with this feature
 is to be able to include hint bits in WAL to perform WAL differential
 operations, which is in some way what pg_rewind does, for example, by
 analyzing the relation blocks modified since the WAL fork point of a
 master with one of its promoted slaves. But if this parameter can be
 modified by the user at will, a given server could end up with a set of
 WAL files having inconsistent hint-bit data (some files might have the
 hint bits, others not), which could create corrupted data when they
 are replayed in recovery.

Yep, that's a problem.

 Considering that, it would make more sense to have this option
 settable with initdb only and not changeable after initialization, in
 the same fashion as checksums. Having a GUC that can be used to check
 if this option is set or not using a SQL command could be an
 additional patch on top of the core feature.

 This does not mean of course that this patch has to be completely
 reworked as the core part of the patch, the documentation and the
 control file part would remain more or less the same.

Forcing it to be done only at initdb time is excessive.  I think you
can just make it PGC_POSTMASTER and have it participate in the
XLOG_PARAMETER_CHANGE mechanism.  pg_rewind can check that it's set in
the control file before agreeing to rewind.  As long as it was set at
the time the master last entered read-write mode (which is what the
XLOG_PARAMETER_CHANGE stuff does) you should be fine, unless of course
I haven't had enough caffeine this morning, which is certainly
possible.
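
To illustrate the difference (a hypothetical sketch; the GUC name below is a
placeholder, since this thread doesn't settle on one): an initdb-only option
would not exist as a settable parameter at all, while a PGC_POSTMASTER GUC
exists but refuses to change without a restart:

  SET wal_log_hintbits = on;
  ERROR:  parameter "wal_log_hintbits" cannot be changed without restarting
          the server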

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Add full object name to the tag field

2013-12-02 Thread Robert Haas
On Mon, Dec 2, 2013 at 9:49 AM, Asit Mahato rigid.a...@gmail.com wrote:
 Hi all,

 I am a newbie. I am unable to understand the to do statement given below.

 Add full object name to the tag field, e.g. for operators we need '=(integer,
 integer)', instead of just '='.

 please help me out with an example.

 Thanks and Regards,
 Asit Mahato

Cast the OID of the operator to regoperator instead of regoper.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Alvaro Herrera
Tom Lane escribió:

 When I've wanted to do that in the past, I've used pg_resetxlog to
 adjust a cluster's counters.  It still requires some manual hacking
 though because pg_resetxlog isn't bright enough to create the new
 pg_clog files needed when you move the xid counter a long way.
 We could fix that, or we could make the backend more forgiving of
 not finding the initial clog segment present at startup ...

FWIW we already have some new code that creates segments when not found.
It's currently used in multixact, and the submitted commit timestamp
module uses it too.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-12-02 Thread Bruce Momjian
On Sun, Dec  1, 2013 at 09:22:52AM +0100, Karsten Hilbert wrote:
 On Sat, Nov 30, 2013 at 03:21:08PM -0800, Kevin Grittner wrote:
 
   If your argument is that you want pg_upgrade to work even if the
   user already turned on default_transaction_read_only in the *new*
   cluster, I would humbly disagree with that goal, for pretty much
   the same reasons I didn't want pg_dump overriding it.
  
  If there were databases or users with default_transaction_read_only
  set in the old cluster, the pg_dumpall run will cause that property
  to be set in the new cluster, so what you are saying seems to be
  that a cluster can't be upgraded to a new major release if any
  database within it has that set.
 
 That is *precisely* my use case which I initially asked about.

The use-case would be that default_transaction_read_only is turned on in
postgresql.conf as part of installing the old and/or new cluster.  The
9.4 PGOPTIONS fix for pg_upgrade _will_ allow such a cluster to be
upgraded.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] DATE type output does not follow datestyle parameter

2013-12-02 Thread Bruce Momjian
On Mon, Dec  2, 2013 at 10:22:47PM +0900, MauMau wrote:
 I'm sorry I didn't respond for a long time.  I've come up with a suggestion.
 
 The original reporter of this problem expected the output of the
 date type in 'Postgres,DMY' style to be "17 Dec 1997", when the
 output of the timestamp is "Wed 17 Dec 07:37:16 1997 PST".  He
 thought so by reading the following sentence:
 
 http://www.postgresql.org/docs/current/static/datatype-datetime.html
 The output of the date and time types is of course only the date or
 time part in accordance with the given examples.
 
 However, the actual output of the date type in Postgres style is:
 
 * 12-17-1997 if datestyle is 'Postgres,YMD' or 'Postgres,MDY'
 * 17-12-1997 if datestyle is 'Postgres,DMY'
 
 So, my suggestion is to just add the following sentence right after
 the above one.
 
 The Postgres style is an exception: the output of the date type is
 either MM-DD-YYYY (e.g. 12-17-1997) or DD-MM-YYYY (e.g. 17-12-1997),
 which is different from the date part of the output of the timestamp
 type.
 
 Could you consider and add this to the manual?

Yes, I will make the change unless someone objects.
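
The behavior in question is easy to reproduce (a sketch; psql output
abbreviated):

  SET datestyle = 'Postgres,DMY';
  SELECT '1997-12-17'::date AS d, '1997-12-17 07:37:16'::timestamp AS ts;
       d      |            ts
  ------------+---------------------------
   17-12-1997 | Wed 17 Dec 07:37:16 1997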

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Proposed feature: Selective Foreign Keys

2013-12-02 Thread Robert Haas
On Mon, Dec 2, 2013 at 10:04 AM, Andrew Dunstan and...@dunslane.net wrote:
 The only way I have thought of as an alternative to this proposal is to use
 a partitioned table with different FK constraints for each child. That's
 certainly doable, but not without a deal of work, and even then you'd be
 giving up certain things, such as guaranteeing the uniqueness of the object
 key, at least without a lot more work.

 You can think of it this way: we currently enforce FK constraints except
 when the value being constrained is NULL (or part of it is NULL in the MATCH
 SIMPLE case). This is really a user-defined extension of the exception
 condition. I have at least one case where I could have used this feature and
 saved a significant amount of work. We wanted to apply FK constraints to a
 very large table, but grandfather in certain cases that didn't meet the
 constraint. That could have been done very simply using this feature.

I also like this feature.   It would be really neat if a FOREIGN KEY
constraint with a WHERE clause could use a *partial* index on the
foreign table provided that the index would be guaranteed to be predOK
for all versions of the foreign key checking query.  That might be
hard to implement, though.

Whether that works or not, it seems to me that a good deal of thought
will need to be given to what dependencies get created when creating a
constraint of this type.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Draft release notes for 9.3.2

2013-12-02 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-12-01 18:56:19 -0500, Tom Lane wrote:
 * is it useful to go into more detail than this about the data corruption
 bugs?  It's not clear to me that we can say more than vacuum and re-clone
 your standbys as far as recovery actions go, at least not within the
 couple of sentences that we traditionally allow for a release note item.

 I think it might be worth mentioning that (parts of) the data are
 potentially recoverable without too much effort in all of the bugs.

I thought about that but was afraid that it'd come off like a commercial
for data recovery services, which didn't seem appropriate.  People who
need this type of service can get advice on the mailing lists, anyway.

 * is there anything important to be said about the fourth and fifth bullet
 points (Fix bugs in setting the visibility-map bit for an empty page

 I doubt it's worth mentioning that one. Pretty much the only possible
 consequence is hitting an Assert() in assert enabled builds in a pretty
 rare scenario. There's no data corruption.

OK, removed.  We usually mention Assert-fixing commits but this update
certainly has enough other reasons to get installed :-(

 and Fix multiple bugs in update chain traversal)?

 These have quite some possible consequences though. Not sure how to
 describe it in few words, but it could lead to updating, locking or
 returning the wrong row (by following into a different ctid chain),
 unnecessary deadlocks (by unnecessarily waiting for an entire
 multixact's members, instead of just the updater), missed and superfluous
 serialization failures in repeatable read and, slightly differently, in
 serializable.  All need concurrency to manifest.

I put in a little bit here.

 Not sure whether f5f92bdc44ffdf577244e0d055825cacd0cdea10,
 d9484ab5f3cbcfea64536fec333723f9aa4c0b2c shouldn't be mentioned
 separately, especially the first could cause autovacuum to crazily spawn
 children without ever actually doing anything useful.

Agreed, significant autovacuum activity is a separate symptom, so it
seems worth mentioning separately.

 I think Sergey's and Jeff's fix
 (4c697d8f4845823a8af67788b219ffa4516ad14c) deserves its own
 headline.

Yeah, I had lumped it with the wrong relfrozenxid accounting issue
but again the symptom is different.

  <para>
   Fix initialization of <filename>pg_clog</> and <filename>pg_subtrans</>
   during hot standby startup (Andres Freund)
  </para>
 I think Heikki spent a fair amount of time looking at the code, so it
 seems fair to also name him as well..

Done.

 Maybe we should mention that hot_standby=on is a prerequisite?

Well, it already says hot standby, but I repeated that term in the
body to emphasize it.

  <para>
   This avoids ever-increasing disk space consumption in hot standby
   mode.
  </para>
 It's not really related to hot standby - anything that never comes out
 of crash recovery is affected. We sometime should come up with a
 coherent name that covers HS, SR, PITR, warm standbys et al...

OK, I said standby server instead, which should cover the most
interesting cases.

 Should the strerror() improvements be mentioned
 (e3480438e89f74019f271b1b5501bb9eed2e0d2a)?

I intentionally left that out because it seemed like a reasonable
explanation would take more space than was justified.

Thanks for looking at the notes!

regards, tom lane




Re: [HACKERS] Handling GIN incomplete splits

2013-12-02 Thread Jeff Janes
On Mon, Dec 2, 2013 at 1:26 AM, Heikki Linnakangas
hlinnakan...@vmware.comwrote:

 On 12/01/2013 10:40 PM, Jeff Janes wrote:

 On Wed, Nov 27, 2013 at 9:40 AM, Jeff Janes jeff.ja...@gmail.com wrote:

  The commit 04eee1fa9ee80dabf7 of this series causes a self-deadlock in the
  LWLock code during the operation below, with it trying to take
  an LW_EXCLUSIVE on a high, even-numbered lockid when it already holds the
  same lockid.

 CREATE INDEX planet_osm_ways_nodes ON planet_osm_ways USING gin (nodes)
   WITH (FASTUPDATE=OFF);

 It happens pretty reliably using osm2pgsql.

 I will try to come up with a simple reproducible demonstration, and stack
 trace, over the weekend.


 Whatever the problem, it seems to have been fixed in ce5326eed386959aa,
 More GIN refactoring.


 That's good, I guess :-). Thanks for the testing. Did you import the full
 planet.osm? I tried with a subset containing just Finland, but didn't see
 any problems.


I used Antarctica.  I don't have the RAM to process the full planet, or the
bandwidth to download it very easily.

Do you think it is worth chasing down where the problem was, to make sure
it was truly fixed rather than simply changed in a way that happens not to
trigger any more in this situation?

Cheers,

Jeff


Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Bruce Momjian
On Sat, Nov 30, 2013 at 12:10:19PM -0500, Bruce Momjian wrote:
  Drat, you're quite right. I've always included the full certificate
  chain in client certs but it's in no way required.
  
  I guess that pretty much means maintaining the status quo and documenting
  it better.
 
 I have developed the attached patch to document this behavior.  My goals
 were:
 
 * clarify that a cert can match a remote intermediate or root certificate
 * clarify that the client cert must match a server root.crt
 * clarify that the server cert must match a client root.crt
 * clarify that the root certificate does not have to be specified
   in the client or server cert as long as the remote end has the chain
   to the root
 
 Does it meet these goals?  Is it correct?

I have updated the patch, attached, to be clearer about the requirement
that intermediate certificates need a chain to root certificates.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
new file mode 100644
index 955f248..443a48e
*** a/doc/src/sgml/libpq.sgml
--- b/doc/src/sgml/libpq.sgml
*** ldap://ldap.acme.com/cn=dbserver,cn=host
*** 7121,7127 ****
     To allow server certificate verification, the certificate(s) of one or more
     trusted <acronym>CA</>s must be
     placed in the file <filename>~/.postgresql/root.crt</> in the user's home
!    directory. (On Microsoft Windows the file is named
     <filename>%APPDATA%\postgresql\root.crt</filename>.)
    </para>
  
--- 7121,7129 ----
     To allow server certificate verification, the certificate(s) of one or more
     trusted <acronym>CA</>s must be
     placed in the file <filename>~/.postgresql/root.crt</> in the user's home
!    directory. If intermediate <acronym>CA</>s appear in
!    <filename>root.crt</filename>, the file must also contain certificate
!    chains to their root <acronym>CA</>s. (On Microsoft Windows the file is named
     <filename>%APPDATA%\postgresql\root.crt</filename>.)
    </para>
  
*** ldap://ldap.acme.com/cn=dbserver,cn=host
*** 7179,7193 ****
     <quote>intermediate</> certificate authority, rather than one that is
     directly trusted by the server.  To use such a certificate, append the
     certificate of the signing authority to the <filename>postgresql.crt</>
!    file, then its parent authority's certificate, and so on up to a
!    <quote>root</> authority that is trusted by the server.  The root
!    certificate should be included in every case where
!    <filename>postgresql.crt</> contains more than one certificate.
    </para>
  
    <para>
!    Note that <filename>root.crt</filename> lists the top-level CAs that are
!    considered trusted for signing server certificates.  In principle it need
     not list the CA that signed the client's certificate, though in most cases
     that CA would also be trusted for server certificates.
    </para>
--- 7181,7195 ----
     <quote>intermediate</> certificate authority, rather than one that is
     directly trusted by the server.  To use such a certificate, append the
     certificate of the signing authority to the <filename>postgresql.crt</>
!    file, then its parent authority's certificate, and so on up to a certificate
!    authority, <quote>root</> or <quote>intermediate</>, that is trusted by
!    the server, i.e. signed by a certificate in the server's
!    <filename>root.crt</filename> file.
    </para>
  
    <para>
!    Note that the client's <filename>~/.postgresql/root.crt</> lists the top-level CAs
!    that are considered trusted for signing server certificates.  In principle it need
     not list the CA that signed the client's certificate, though in most cases
     that CA would also be trusted for server certificates.
    </para>
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
new file mode 100644
index ab51782..4916837
*** a/doc/src/sgml/runtime.sgml
--- b/doc/src/sgml/runtime.sgml
*** pg_dumpall -p 5432 | psql -d postgres -p
*** 1986,1995 ****
     <quote>intermediate</> certificate authority, rather than one that is
     directly trusted by clients.  To use such a certificate, append the
     certificate of the signing authority to the <filename>server.crt</> file,
!    then its parent authority's certificate, and so on up to a <quote>root</>
!    authority that is trusted by the clients.  The root certificate should
!    be included in every case where <filename>server.crt</> contains more than
!    one certificate.
    </para>
  
    <sect2 id="ssl-client-certificates">
--- 1986,1995 ----
     <quote>intermediate</> certificate authority, rather than one that is
     directly trusted by clients.  To use such a certificate, append the
     certificate of the signing authority to the <filename>server.crt</> file,
!    then its parent authority's certificate, and so on up to a certificate
!    authority, <quote>root</> or <quote>intermediate</>, that is trusted by
!    clients, i.e. signed by a certificate in the clients'
!    <filename>root.crt</filename> files.

Re: [HACKERS] Re: Suggestion: Issue warning when calling SET TRANSACTION outside transaction block

2013-12-02 Thread Bruce Momjian
On Fri, Nov 29, 2013 at 01:19:54PM -0500, Bruce Momjian wrote:
 On Fri, Nov 29, 2013 at 01:05:20PM -0500, Bruce Momjian wrote:
  On Fri, Nov 29, 2013 at 12:27:49AM -0500, Robert Haas wrote:
   On Thu, Nov 28, 2013 at 11:04 PM, Alvaro Herrera
   alvhe...@2ndquadrant.com wrote:
David Johnston wrote:
   
 In all of these cases we are assuming that the user understands that
 emitting a warning means that something is being logged to disk and thus is
 causing a resource drain.

 I like explicitly saying that issuing these commands is pointless/has no
 effect; being indirect and saying that the only thing they do is emit a
 warning omits any explicit explanation of why.  And while I agree
 that logging the warning is an effect, it is not the primary/direct
 effect that the user cares about.
   
Honestly I still prefer what I proposed initially, which AFAICS has all
the properties you deem desirable in the wording:
   
issuing ROLLBACK outside a transaction emits a warning and otherwise 
has no effect.
   
   Yeah, I still like otherwise has no effect or has no other effect
   best.  But I can live with Bruce's latest proposal, too.
  
  OK, great, I have gone with Alvaro's wording;  patch attached.
 
 Duh, missing patch.  Attached now.

Patch applied.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-12-02 Thread Karsten Hilbert
On Mon, Dec 02, 2013 at 11:41:10AM -0500, Bruce Momjian wrote:

   If there were databases or users with default_transaction_read_only
   set in the old cluster, the pg_dumpall run will cause that property
   to be set in the new cluster, so what you are saying seems to be
   that a cluster can't be upgraded to a new major release if any
   database within it has that set.
  
  That is *precisely* my use case which I initially asked about.
 
 The use-case would be that default_transaction_read_only is turned on in
 postgresql.conf

Are you telling me which use case I initially asked
about on this thread ?

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [HACKERS] Draft release notes for 9.3.2

2013-12-02 Thread Andres Freund
On 2013-12-02 12:26:25 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  I think it might be worth mentioning that (parts of) the data are
  potentially recoverable without too much effort in all of the bugs.
 
 I thought about that but was afraid that it'd come off like a commercial
 for data recovery services, which didn't seem appropriate.  People who
 need this type of service can get advice on the mailing lists, anyway.

Fair argument. Especially when suggested by a person working for a
company providing such services. On that note, I would take the
following with a grain of salt: I think it leaves users a bit alone
with their problems. Maybe: "Data might be recoverable"?

Thanks!

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 I have updated the patch, attached, to be clearer about the requirement
 that intermediate certificates need a chain to root certificates.

I see that you removed the sentence

   The root
   certificate should be included in every case where
   <filename>postgresql.crt</> contains more than one certificate.

in both places where it appeared.  I seem to remember that I'd put that
in on the basis of experimentation, ie it didn't work to provide just
a partial chain.  You appear to be telling people that it's safe to
omit the root cert, and I think this is wrong.

Specifically, rather than the text "trusted by the server, i.e. signed by
a certificate in the server's <filename>root.crt</filename> file", I think
you need to say "trusted by the server, i.e., appears in the server's
<filename>root.crt</filename> file".  Have you experimented with the
configuration you're proposing, and if so, with which OpenSSL versions?

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread David E. Wheeler
On Dec 2, 2013, at 6:14 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

 Whether you're targeting a file system template or a catalog template,
 PGXN is not a complete solution; you still need to build the extension.

This is true today, but only because PostgreSQL provides the infrastructure for 
building and installing extensions that entails `make && make install`. If 
Postgres provided some other method of building and installing extensions, you 
could start using it right away on PGXN. The *only* requirement for PGXN 
distributions, really, is a META.json file describing the extension.

Best,

David



Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-12-02 Thread Bruce Momjian
On Mon, Dec  2, 2013 at 06:57:53PM +0100, Karsten Hilbert wrote:
 On Mon, Dec 02, 2013 at 11:41:10AM -0500, Bruce Momjian wrote:
 
If there were databases or users with default_transaction_read_only
set in the old cluster, the pg_dumpall run will cause that property
to be set in the new cluster, so what you are saying seems to be
that a cluster can't be upgraded to a new major release if any
database within it has that set.
   
   That is *precisely* my use case which I initially asked about.
  
  The use-case would be that default_transaction_read_only is turned on in
  postgresql.conf
 
 Are you telling me which use case I initially asked
 about on this thread ?

No, this is another use-case that is fixed by the pg_upgrade patch.  The
ALTER DATABASE SET case is also fixed by this patch.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] In-core regression tests for replication, cascading, archiving, PITR, etc.

2013-12-02 Thread Andres Freund
On 2013-12-02 09:59:12 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  I think we also needs support for testing xid/multixid wraparound. It
  currently isn't realistically testable because of the timeframes
  involved.
 
 When I've wanted to do that in the past, I've used pg_resetxlog to
 adjust a cluster's counters.

I've done that as well, but it's painful and not necessarily testing
the right thing. E.g. I am far from sure we handle setting the
anti-wraparound limits correctly when promoting a standby - a restart to
adapt pg_control changes things, and it might get rolled back because of
already-logged checkpoints.

What I'd love is a function that gives me the opportunity to
*efficiently* move pg_clog and pg_multixact/offsets,members forward by
large chunks. So e.g. I could run a normal pgbench alongside another
pgbench moving clog forward in 500k chunks, so that it creates the
necessary files I could possibly need to access.

If you do it naively you get into quite some fun with hot standby, btw. I
can tell you that from experience :P

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Robert Haas
On Mon, Dec 2, 2013 at 10:13 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Heikki Linnakangas hlinnakan...@vmware.com writes:
 I fear we're wandering off the point again. So let me repeat: It must be
 possible to install the same extension the way you do today, and using the
 new mechanism.

 The way you do today is running make install or apt-get install or
 something else to write files in the right place on the file system,
 usually with root privileges.

 The new mechanism tries to avoid using the file system *completely*.

  Sorry. I don't understand what you mean other than “I don't want this
 patch because I don't understand what it is about”.

OK, I'll bite.  I've been trying to stay out of this thread, but I
really *don't* understand what this patch is about.  Extensions, as
they exist today, are installed from the filesystem and their contents
are not dumped.  You're trying to create a new kind of extension which
is installed from the system catalogs (instead of the file system) and
is dumped.  Why should anyone want that?

It seems that part of the answer is that people would like to be able
to install extensions via libpq.  You could almost write a client-side
tool for that today just by using adminpack to write the files to the
server, but you'd trip over the fact that files written by adminpack
must be in either the data directory or the log directory.  But we
could fix that easily enough.  Here's a design sketch: (1) Add a new
GUC that specifies an alternate location from which extensions can be
installed.  (2) Allow adminpack to write to that location just as it
writes to the existing locations.  (3) Write a tool that copies files
from wherever to the appropriate server directory using adminpack.
Problem solved!  The only downside is that the use of this facility
would have to be restricted to superusers, but the current consensus
on this thread is that we should restrict *this* facility to
superusers also, so we're not really losing anything.  And, for a
further plus, it'd even work for extensions that contain shared
libraries.  Win.
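
For concreteness, step (3) might look roughly like this from a libpq client,
using adminpack's existing pg_file_write() (a sketch: the extension, its
script, and the target directory are invented, the path assumes the relaxed
location check from step (1), and a real extension would also need its
control file written the same way):

  SELECT pg_catalog.pg_file_write(
      'my_extensions/myext--1.0.sql',
      $script$
        CREATE FUNCTION myext_hello() RETURNS text
            LANGUAGE sql AS $$ SELECT 'hello' $$;
      $script$,
      false);
  CREATE EXTENSION myext;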

Now, if we could make this mechanism work for non-superusers, then I
think it gets more interesting, because now you have a more
significant gain in functionality: someone can potentially download an
extension from PGXN and install it just for themselves without needing
superuser access, provided the extension doesn't require a .so or any
permissions that they don't have.  That's kind of nice, but as Stephen
said elsewhere on the thread, this seems like a lot of mechanism for
that narrow goal.  As you (I think) said somewhere on this thread, you
could just create the extension with a bunch of CREATE and ALTER
EXTENSION .. ADD statements and set a flag on it that causes it to be
dumped the same way.  (We might need to add a CREATE EXTENSION foo
WITH NO CONTENTS statement to really make it work, so that the libpq
connection can create it as completely empty and then add objects to
it one at a time, but we shouldn't need too much more than that.)  The
whole idea of the extension template as such goes away.

So I'm having a hard time understanding what this patch actually gains
us that can't be done more simply by some other means.
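
To make the CREATE/ALTER EXTENSION alternative above concrete (a sketch:
ALTER EXTENSION .. ADD is existing syntax, while the NO CONTENTS clause and
the dump-this-one flag are the hypothetical pieces):

  CREATE EXTENSION myext WITH NO CONTENTS;   -- hypothetical empty creation
  CREATE FUNCTION myext_hello() RETURNS text
      LANGUAGE sql AS $$ SELECT 'hello' $$;
  ALTER EXTENSION myext ADD FUNCTION myext_hello();
  -- plus a per-extension flag telling pg_dump to emit the members normally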

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-12-02 Thread Karsten Hilbert
On Mon, Dec 02, 2013 at 01:24:18PM -0500, Bruce Momjian wrote:

 If there were databases or users with default_transaction_read_only
 set in the old cluster, the pg_dumpall run will cause that property
 to be set in the new cluster, so what you are saying seems to be
 that a cluster can't be upgraded to a new major release if any
 database within it has that set.

That is *precisely* my use case which I initially asked about.
   
   The use-case would be that default_transaction_read_only is turned on in
   postgresql.conf
  
  Are you telling me which use case I initially asked
  about on this thread ?
 
 No, this is another use-case that is fixed the pg_upgrade patch.  The
 ALTER DATABASE SET is also fixed by this patch.

I see.

Thanks,
Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Jeff Davis
On Sun, 2013-12-01 at 22:34 -0500, Stephen Frost wrote:
 Perhaps I'm missing something- but we already *have* a catalog
 representation for every extension that's ever installed into a given
 database.  A representation that's a heck of a lot better than a big
 text blob.

I meant an extension template represented in the catalog.

  But bringing more of an extension into the catalog can be done, and I
  think we'll see big benefits from that.
 
 I'm not following here- what's 'missing'?

It seems that you are making the assumption that installing an extension
template or creating an extension are major operations, and anyone
encountering an error is a superuser with admin access to the server and
can easily correct it.

If the admin messes up and the extension template isn't there (e.g.
after a failover), the person to encounter the error at CREATE EXTENSION
time might not have admin access or there might be a process required to
deploy the new files. But if the extension templates were carried along
with replication and backup naturally, then they'd be there.

And it would be nice if there was some hope for non-superusers to create
extension templates, but that will never happen as long as they are
files.

  That being said, there some things about in-catalog templates that need
  some more thought:
  
1. If someone does want their OS to install extensions for them (e.g.
  the contrib package), how should that be done? This usually works fine
  with the aforementioned languages, because installation is still just
  dropping files in the right place. Postgres is different, because to put
  something in the catalog, we need a running server, which is awkward for
  a packaging system to do.
 
 You need a running PG for the *extension* to be installed, but with the
 filesystem-based extension approach we have today, the templates (which
 are the files on the filesystem) don't need PG running

I think you misread -- this is a list of issues if we move templates
into the catalog. File-based templates obviously don't have this
problem.

3. What do we do about native shared libraries? Ultimately, I imagine
  that we should handle these similarly to tablespaces: have a real
  database object with an OID that extensions or functions can depend on,
  and create a symlink (with the OID as the link name) that points to the
  real file on disk. We could also export some new symbols like the shared
  library name and version for better error checking.
 
 I'm sorry, but I do not see shared libraries working through this
 system, at all.  I know that goes against what Dimitri and some others
 want, but I've talked with a few folks (such as Paul Ramsey of PostGIS)
 about this notion and, from that perspective, it's almost laughable to
 think we could ship shared libraries in this way.  Even if we could
 convince ourselves that there's some way for us to track the files on
 the filesystem and work out all the per-database and whatever issues are
 associated with that, it'd only work for the simplest shared libraries
 which don't have any dependencies on other libraries on the system
 (excepting, perhaps, libc6) and that narrows the use-case down
 significantly, to the point where I don't feel it's worth all that
 effort.

I was just suggesting that a little more information in the catalog
could improve dependency tracking and error handling. I'm not suggesting
we ship any shared libraries anywhere -- that's still up to extension
authors and PGXN. I'm also not suggesting that the error handling will
be perfect or catch subtle mismatches.

Regards,
Jeff Davis






Re: [HACKERS] Draft release notes for 9.3.2

2013-12-02 Thread Mika Eloranta
On 02 Dec 2013, at 01:56, Tom Lane t...@sss.pgh.pa.us wrote:
 
 * any other items that need to be corrected or expanded?

I think 2103430 ("Fix parsing of xlog file name in pg_receivexlog") is worth
mentioning, as all past pg_receivexlog 9.3.x versions fail to resume
interrupted streaming after ~4 GiB of xlogs have been generated.

Cheers,

-- 
Mika Eloranta
Ohmu Ltd.  http://www.ohmu.fi/





Re: [HACKERS] Draft release notes for 9.3.2

2013-12-02 Thread Josh Berkus
Tom,

The issue can be ameliorated by, after upgrading, vacuuming all tables
in all databases while having vacuum_freeze_table_age set to zero. 

Why not say:

This issue can be ameliorated by, after upgrading, running a
database-wide VACUUM FREEZE.

Or is there a difference in this case?  If so, what?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Draft release notes for 9.3.2

2013-12-02 Thread Andres Freund
On 2013-12-02 10:51:28 -0800, Josh Berkus wrote:
 Tom,
 
 The issue can be ameliorated by, after upgrading, vacuuming all tables
 in all databases while having vacuum_freeze_table_age set to zero. 
 
 Why not say:
 
 This issue can be ameliorated by, after upgrading, running a
 database-wide VACUUM FREEZE.
 
 Or is there a difference in this case?  If so, what?

Setting vacuum_freeze_table_age = 0 causes a full-table scan, but doesn't freeze
rows younger than vacuum_freeze_min_age. I.e. it finishes much faster.
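
So the two recovery procedures, side by side (a sketch):

  -- What the release notes suggest: full-table scans, but only rows older
  -- than vacuum_freeze_min_age get frozen, so it finishes sooner.
  SET vacuum_freeze_table_age = 0;
  VACUUM;

  -- Database-wide VACUUM FREEZE: also scans everything, but freezes every
  -- row regardless of age, hence considerably more writes.
  VACUUM FREEZE;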

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Draft release notes for 9.3.2

2013-12-02 Thread Tom Lane
Mika Eloranta m...@ohmu.fi writes:
 On 02 Dec 2013, at 01:56, Tom Lane t...@sss.pgh.pa.us wrote:
 * any other items that need to be corrected or expanded?

 I think 2103430 (Fix parsing of xlog file name in pg_receivexlog) is worth
 mentioning, as all past pg_receivexlog 9.3.x versions fail to resume
 interrupted streaming after ~4 GiB of xlogs have been generated.

OK, I'd misunderstood the severity of that problem.  Item added, thanks!

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Greg Stark
On Mon, Dec 2, 2013 at 6:30 PM, Robert Haas robertmh...@gmail.com wrote:
 OK, I'll bite.  I've been trying to stay out of this thread, but I
 really *don't* understand what this patch is about.  Extensions, as
 they exist today, are installed from the filesystem and their contents
 are not dumped.  You're trying to create a new kind of extension which
 is installed from the system catalogs (instead of the file system) and
 is dumped.  Why should anyone want that?

 It seems that part of the answer is that people would like to be able
 to install extensions via libpq.  You could almost write a client-side
 tool for that today just by using adminpack to write the files to the
 server, but you'd trip over the fact that files written by adminpack
 must be in either the data directory or the log directory.  But we
 could fix that easily enough.

Just tossing an idea out there. What if you could install an extension
by specifying not a local file name but a URL. Obviously there's a
security issue but for example we could allow only https URLs with
verified domain names that are in a list of approved domain names
specified by a GUC.
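
As a strawman, to make the idea concrete (neither the GUC nor the
syntax exists; both names are made up):

    SET extension_source_domains = 'pgxn.org';        -- hypothetical GUC
    CREATE EXTENSION hstore
        USING URL 'https://pgxn.org/dist/hstore.zip'; -- hypothetical syntax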

-- 
greg




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-12-02 Thread Alvaro Herrera
Dean Rasheed escribió:

 +/*
 + * If a schema was explicitly specified, test if it exists.  If it does not,
 + * report the schema as missing rather than the child object.
 + */
 +static bool
 +schema_does_not_exist_skipping(List *objname,
 +                               const char **msg,
 +                               char **name)
 +{
 +    RangeVar   *rel;
 +
 +    rel = makeRangeVarFromNameList(objname);
 +
 +    if (rel->schemaname != NULL &&
 +        !OidIsValid(LookupNamespaceNoError(rel->schemaname)))
 +    {
 +        *msg = gettext_noop("schema \"%s\" does not exist, skipping");
 +        *name = rel->schemaname;
 +
 +        return true;
 +    }
 +
 +    return false;
 +}

In success cases, are we leaking a lot of memory?  In the error case I
guess it doesn't matter that the RangeVar is getting leaked (we're
aborting anyway), but if we're called and everything turns out to work,
are things cleaned up timely?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Bruce Momjian
On Mon, Dec  2, 2013 at 12:59:41PM -0500, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  I have updated the patch, attached, to be clearer about the requirement
  that intermediate certificates need a chain to root certificates.
 
 I see that you removed the sentence
 
The root
certificate should be included in every case where
 <filename>postgresql.crt</filename> contains more than one certificate.
 
 in both places where it appeared.  I seem to remember that I'd put that
 in on the basis of experimentation, ie it didn't work to provide just
 a partial chain.  You appear to be telling people that it's safe to
 omit the root cert, and I think this is wrong.
 
 Specifically, rather than the text "trusted by the server, i.e. signed by
 a certificate in the server's <filename>root.crt</filename> file", I think
 you need to say "trusted by the server, i.e., appears in the server's
 <filename>root.crt</filename> file".  Have you experimented with the
 configuration you're proposing, and if so, with which OpenSSL versions?

I am basing the text on the tests done in this thread, though I can test
it myself too (though I have not yet).  This email indicates we only
need the client cert in the client, not the chain to root:

http://www.postgresql.org/message-id/5146a103.8080...@2ndquadrant.com

OK, we're good now, the server is sending us the intermediate cert we
require. Regular non-client-cert verified SSL is fine.  Examination of
the protocol chat shows that the server is sending a Server Hello with a
Certificate message containing the server and intermediate certificate
DNs:

It can get the root and intermediate from the server, hence the "signed
by" rather than "appears" wording.  This text indicates also that the
client doesn't have to have the certificate chain to the root:

http://www.postgresql.org/message-id/514a9ddf.3050...@2ndquadrant.com
Drat, you're quite right. I've always included the full certificate
chain in client certs but it's in no way required.

I don't fully understand the issues but the discussion seems to indicate
this.  Am I missing something?  Should I run some tests?

-- 
  Bruce Momjian  br...@momjian.us    http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Robert Haas
On Mon, Dec 2, 2013 at 2:33 PM, Greg Stark st...@mit.edu wrote:
 On Mon, Dec 2, 2013 at 6:30 PM, Robert Haas robertmh...@gmail.com wrote:
 OK, I'll bite.  I've been trying to stay out of this thread, but I
 really *don't* understand what this patch is about.  Extensions, as
 they exist today, are installed from the filesystem and their contents
 are not dumped.  You're trying to create a new kind of extension which
 is installed from the system catalogs (instead of the file system) and
 is dumped.  Why should anyone want that?

 It seems that part of the answer is that people would like to be able
 to install extensions via libpq.  You could almost write a client-side
 tool for that today just by using adminpack to write the files to the
 server, but you'd trip over the fact that files written by adminpack
 must be in either the data directory or the log directory.  But we
 could fix that easily enough.

 Just tossing an idea out there. What if you could install an extension
 by specifying not a local file name but a URL. Obviously there's a
 security issue but for example we could allow only https URLs with
 verified domain names that are in a list of approved domain names
 specified by a GUC.

That's a different feature, but I don't see anything preventing
someone from implementing that as an extension, today, without any
core support at all.  It would only be usable in cases where the share
directory is writable by the database server (i.e. low-security
installations) and you'd have to make it a function call rather than
piggybacking on CREATE EXTENSION, but neither of those things sound
bad to me.  (And if they are bad, they could be addressed by providing
hooks or event triggers, leaving the rest of the functionality in the
extension module.)
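
To sketch the shape such an extension could take (the function name is
made up; it would have to be superuser-only, since it writes to the
share directory):

    SELECT fetch_extension_files('https://example.com/my_ext-1.0.zip');
    CREATE EXTENSION my_ext;  -- now resolvable from the share directory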

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Greg Stark st...@mit.edu writes:
 On Mon, Dec 2, 2013 at 6:30 PM, Robert Haas robertmh...@gmail.com wrote:
 OK, I'll bite.  I've been trying to stay out of this thread, but I
 really *don't* understand what this patch is about.  Extensions, as

Thanks!

 they exist today, are installed from the filesystem and their contents
 are not dumped.  You're trying to create a new kind of extension which
 is installed from the system catalogs (instead of the file system) and
 is dumped.  Why should anyone want that?

To benefit from ALTER EXTENSION … UPDATE … and \dx.

And technically the extension is not dumped, its templates are.

 It seems that part of the answer is that people would like to be able
 to install extensions via libpq.  You could almost write a client-side
 tool for that today just by using adminpack to write the files to the
 server, but you'd trip over the fact that files written by adminpack
 must be in either the data directory or the log directory.  But we
 could fix that easily enough.

Trick question: when you've implemented said client and used it for a
couple of (in-house) extensions, what do you think should happen at
pg_restore time?

Hint: in a properly designed ops model, pg_restore happens each and
  every day when the unattended cron job “rebases” the QA or testing
  environments from the production PITR backups, of course.

 Just tossing an idea out there. What if you could install an extension
 by specifying not a local file name but a URL. Obviously there's a
 security issue but for example we could allow only https URLs with
 verified domain names that are in a list of approved domain names
 specified by a GUC.

That's something I want to build. This time, not in core.

The model I've been thinking about involves an EVENT TRIGGER that is
fired at ddl_command_start for CREATE EXTENSION and prepares an
EXTENSION TEMPLATE before the command has a chance to check what's
available and install the current default version of it.

Also runs at ALTER EXTENSION … UPDATE …, of course, providing the
upgrade scripts on the fly.
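
In rough strokes, the shape of that model (the trigger body is the part
left to build; the tags assume the firing matrix covers the extension
commands):

    CREATE FUNCTION prepare_extension_template()
      RETURNS event_trigger LANGUAGE plpgsql AS $$
    BEGIN
      -- fetch the scripts (e.g. over https) and create the template
      -- in the catalogs before the command looks up available versions
      NULL;
    END;
    $$;

    CREATE EVENT TRIGGER fetch_template_on_demand
      ON ddl_command_start
      WHEN TAG IN ('CREATE EXTENSION', 'ALTER EXTENSION')
      EXECUTE PROCEDURE prepare_extension_template();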

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Andrew Dunstan


On 12/02/2013 02:45 PM, Bruce Momjian wrote:

On Mon, Dec  2, 2013 at 12:59:41PM -0500, Tom Lane wrote:

Bruce Momjian br...@momjian.us writes:

I have updated the patch, attached, to be clearer about the requirement
that intermediate certificates need a chain to root certificates.

I see that you removed the sentence

The root
certificate should be included in every case where
<filename>postgresql.crt</filename> contains more than one certificate.

in both places where it appeared.  I seem to remember that I'd put that
in on the basis of experimentation, ie it didn't work to provide just
a partial chain.  You appear to be telling people that it's safe to
omit the root cert, and I think this is wrong.

Specifically, rather than the text "trusted by the server, i.e. signed by
a certificate in the server's <filename>root.crt</filename> file", I think
you need to say "trusted by the server, i.e., appears in the server's
<filename>root.crt</filename> file".  Have you experimented with the
configuration you're proposing, and if so, with which OpenSSL versions?

I am basing the text on the tests done in this thread, though I can test
it myself too (though I have not yet).  This email indicates we only
need the client cert in the client, not the chain to root:

http://www.postgresql.org/message-id/5146a103.8080...@2ndquadrant.com

OK, we're good now, the server is sending us the intermediate cert we
require. Regular non-client-cert verified SSL is fine.  Examination of
the protocol chat shows that the server is sending a Server Hello with a
Certificate message containing the server and intermediate certificate
DNs:

It can get the root and intermediate from the server, hence the "signed
by" rather than "appears" wording.  This text indicates also that the
client doesn't have to have the certificate chain to the root:

http://www.postgresql.org/message-id/514a9ddf.3050...@2ndquadrant.com
Drat, you're quite right. I've always included the full certificate
chain in client certs but it's in no way required.

I don't fully understand the issues but the discussion seems to indicate
this.  Am I missing something?  Should I run some tests?



AIUI, you need a complete chain from one end to the other. So the cert 
being checked can include the intermediate cert in what it sends, or it 
can be in the root.crt at the other end, but one way or another, the 
checking end needs a complete chain from a root cert to the cert from 
the other end.


cheers

andrew




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Mon, Dec  2, 2013 at 12:59:41PM -0500, Tom Lane wrote:
 I see that you removed the sentence
 The root
 certificate should be included in every case where
 <filename>postgresql.crt</filename> contains more than one certificate.

 I don't fully understand the issues but the discussion seens to indicate
 this.  Am I missing something?  Should I run some tests?

My recollection is that if the client cert file includes *only* the
client's own cert, the server will puzzle out how that connects to the
certs it has.  However, if the client cert file contains more than one
cert (ie, client's cert and some intermediate-CA cert), the server
will *not* try to associate the intermediate cert with some root cert it
has.  It wants the chain the client sends to terminate in a cert that it
has listed directly in root.crt.

It's possible that my recollection is faulty, or that this behavior was
a bug that's been fixed in more recent OpenSSL versions.  If it's the
latter, though, I hesitate to tell people they can rely on the corrected
behavior.  The text in question is from May 2010, and I would've been
testing on whatever OpenSSL version was then current in Fedora, so it
would hardly be a version that's disappeared from the wild.
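
This is easy enough to re-check against any given OpenSSL build from the
shell (file names arbitrary), the question being whether the server-side
verification behaves like the second line below or insists the client's
chain terminate in root.crt:

    # client cert signed by an intermediate; root.crt holds only the root
    openssl verify -CAfile root.crt client.crt          # fails, no chain
    openssl verify -CAfile root.crt -untrusted intermediate.crt client.crt
    # what a chain-bearing postgresql.crt amounts to on the client side
    cat client.crt intermediate.crt > postgresql.crt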

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  I don't like the idea of having a pg_dump/restore mechanism that
  intentionally tries to go out and install the latest version of whatever
  extension was installed in the old DB by downloading it from PGXN,
  building it, and then installing it...  Is that what people are
  expecting here?
 
 The whole idea of having Extension Templates in catalogs is exactly to
 prevent what you're describing here from happening.

eh...?  I'm not sure how that follows.

 Whatever the templates you downloaded to get to the version you now have
 in your database for extension “foo” are going to used again by
 pg_restore at CREATE EXTENSION time. The extension depending on its
 in-catalog templates ensures that model of operations.

What I've been trying to point out is that there's absolutely zero need
for the 'extension template' part of this to make a pg_restore work for
an entirely-in-the-catalog extension.  I realize that's how you've done
it with this patch set but that doesn't make it necessary.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
David,

* David E. Wheeler (da...@justatheory.com) wrote:
 On Dec 2, 2013, at 6:14 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 
  Whether you're targetting a file system template or a catalog template,
  PGXN is not a complete solution, you still need to build the extension.
 
 This is true today, but only because PostgreSQL provides the infrastructure 
 for building and installing extensions that entails `make  make install`. 
 If Postgres provided some other method of building and installing extensions, 
 you could start using it right away on PGXN. The *only* requirement for PGXN 
 distributions, really, is a META.json file describing the extension.

Thanks, that's a pretty interesting point..  I like the idea that we
could provide a new make target which could build an 'inline extension'
(or what-have-you) which could then be distributed and used by users
either directly or with some client-side tool.

Thanks,

Stephen




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Ian Pilcher
On 12/02/2013 02:01 PM, Andrew Dunstan wrote:
 AIUI, you need a complete chain from one end to the other. So the cert
 being checked can include the intermediate cert in what it sends, or it
 can be in the root.crt at the other end, but one way or another, the
 checking end needs a complete chain from a root cert to the cert from
 the other end.

Yes.  And the problem is that there is no way to prevent OpenSSL from
accepting intermediate certificates supplied by the client.  As a
result, the server cannot accept client certificates signed by one
intermediate CA without also accepting *any* client certificate that can
present a chain back to the root CA.

Frankly, this whole conversation reinforces my belief that this behavior
is so counter-intuitive that it really should be changed.

GnuTLS for the win?

-- 

Ian Pilcher arequip...@gmail.com
   Sent from the cloud -- where it's already tomorrow





Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-12-02 Thread Dean Rasheed
On 2 December 2013 19:37, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 Dean Rasheed escribió:

 +/*
 + * If a schema was explicitly specified, test if it exists.  If it does not,
 + * report the schema as missing rather than the child object.
 + */
 +static bool
 +schema_does_not_exist_skipping(List *objname,
 +                               const char **msg,
 +                               char **name)
 +{
 +    RangeVar   *rel;
 +
 +    rel = makeRangeVarFromNameList(objname);
 +
 +    if (rel->schemaname != NULL &&
 +        !OidIsValid(LookupNamespaceNoError(rel->schemaname)))
 +    {
 +        *msg = gettext_noop("schema \"%s\" does not exist, skipping");
 +        *name = rel->schemaname;
 +
 +        return true;
 +    }
 +
 +    return false;
 +}
 +}

 In success cases, are we leaking a lot of memory?  In the error case I
 guess it doesn't matter that the RangeVar is getting leaked (we're
 aborting anyway), but if we're called and everything turns out to work,
 are things cleaned up timely?


I think that memory gets freed at the end of the DROP command, so I
don't think this is a concern. In any case, that RangeVar is only of
order 50 bytes. If we were concerned about memory leakage here, a
bigger concern would be the calling code in does_not_exist_skipping(),
which is using NameListToString() which allocates at least 1024 bytes
for the name of the non-existent object without freeing it.

Regards,
Dean




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 * David E. Wheeler (da...@justatheory.com) wrote:
 This is true today, but only because PostgreSQL provides the
 infrastructure for building and installing extensions that entails `make
  make install`. If Postgres provided some other method of building and
 installing extensions, you could start using it right away on PGXN. The
 *only* requirement for PGXN distributions, really, is a META.json file
 describing the extension.

 Thanks, that's a pretty interesting point..  I like the idea that we
 could provide a new make target which could build an 'inline extension'
 (or what-have-you) which could then be distributed and used by users
 either directly or with some client-side tool.

+1

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Tom Lane
Ian Pilcher arequip...@gmail.com writes:
 Yes.  And the problem is that there is no way to prevent OpenSSL from
 accepting intermediate certificates supplied by the client.  As a
 result, the server cannot accept client certificates signed by one
 intermediate CA without also accepting *any* client certificate that can
 present a chain back to the root CA.

Isn't that sort of the point?

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
 Problem solved!  The only downside is that the use of this facility
 would have to be restricted to superusers, but the current consensus
 on this thread is that we should restrict *this* facility to
 superusers also, so we're not really losing anything.  And, for a
 further plus, it'd even work for extensions that contain shared
 libraries.  Win.

This is more-or-less what I've been arguing for when it comes to
addressing this for people who are superusers and want to install
.so's via libpq: do it outside of PG core, thank you very much.

 Now, if we could make this mechanism work for non-superusers, then I
 think it gets more interesting, because now you have a more
 significant gain in functionality: someone can potentially download an
 extension from PGXN and install it just for themselves without needing
 superuser access, provided the extension doesn't require a .so or any
 permissions that they don't have.  

This is exactly what I want to see happen, with perhaps some external
tool which handles the "download from PGXN and install it" part, or
maybe we add that to psql, but I feel like a dedicated tool would be
better.

 That's kind of nice, but as Stephen
 said elsewhere on the thread, this seems like a lot of mechanism for
 that narrow goal.  As you (I think) said somewhere on this thread, you
 could just create the extension with a bunch of CREATE and ALTER
 EXTENSION .. ADD statements and set a flag on it that causes it to be
 dumped the same way.  (We might need to add a CREATE EXTENSION foo
 WITH NO CONTENTS statement to really make it work, so that the libpq
 connection can create it as completely empty and then add objects to
 it one at a time, but we shouldn't need too much more than that.)  The
 whole idea of the extension template as such goes away.

Exactly.
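
For clarity, the flow would look about like this, WITH NO CONTENTS being
the one hypothetical piece (ALTER EXTENSION ... ADD exists today):

    CREATE EXTENSION my_ext WITH NO CONTENTS;  -- hypothetical syntax
    CREATE FUNCTION my_func() RETURNS int LANGUAGE sql AS 'SELECT 1';
    ALTER EXTENSION my_ext ADD FUNCTION my_func();
    -- plus a per-extension flag telling pg_dump to dump the members
    -- instead of emitting a bare CREATE EXTENSION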

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 What I've been trying to point out is that there's absolutely zero need
 for the 'extension template' part of this to make a pg_restore work for
 an entirely-in-the-catalog extension.  I realize that's how you've done
 it with this patch set but that doesn't make it necessary.

If it's an extension, it's filtered out of pg_dump, so it's not part of
your pg_restore. Full Stop. This point has been debated and there has
been a very clear conclusion a year ago.

What am I missing here?
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Ian Pilcher
On 12/02/2013 02:17 PM, Tom Lane wrote:
 Ian Pilcher arequip...@gmail.com writes:
 Yes.  And the problem is that there is no way to prevent OpenSSL from
 accepting intermediate certificates supplied by the client.  As a
 result, the server cannot accept client certificates signed by one
 intermediate CA without also accepting *any* client certificate that can
 present a chain back to the root CA.
 
 Isn't that sort of the point?
 

I'm not sure what you're asking.  The desired behavior (IMO) would be to
accept client certificates signed by some intermediate CAs without
accepting any client certificate that can present a chain back to the
trusted root.  This is currently not possible, mainly due to the way
that OpenSSL works.

-- 

Ian Pilcher arequip...@gmail.com
   Sent from the cloud -- where it's already tomorrow





Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-12-02 Thread Alvaro Herrera
Dean Rasheed escribió:

 I think that memory gets freed at the end of the DROP command, so I
 don't think this is a concern. In any case, that RangeVar is only of
 order 50 bytes. If we were concerned about memory leakage here, a
 bigger concern would be the calling code in does_not_exist_skipping(),
 which is using NameListToString() which allocates at least 1024 bytes
 for the name of the non-existent object without freeing it.

Fair enough.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 On Sun, 2013-12-01 at 22:34 -0500, Stephen Frost wrote:
  Perhaps I'm missing something- but we already *have* a catalog
  representation for every extension that's ever installed into a given
  database.  A representation that's a heck of a lot better than a big
  text blob.
 
 I meant extension template represented in the catalog.

Ok.

   But bringing more of an extension into the catalog can be done, and I
   think we'll see big benefits from that.
  
  I'm not following here- what's 'missing'?

The above was in reference to an extension, not an extension template.

 It seems that you are making the assumption that installing an extension
 template or creating an extension are major operations, and anyone
 encountering an error is a superuser with admin access to the server and
 can easily correct it.

I'm not following this train of thought at all..  Unless you're
referring to the case where it involves a .so, in which case then I
*would* expect a superuser with admin access to the server to be
involved.  For the non-.so-including case, I'm hopeful we can provide
a mechanism which doesn't require a superuser or admin at all.

 If the admin messes up and the extension template isn't there (e.g.
 after a failover), the person to encounter the error at CREATE EXTENSION
 time might not have admin access or there might be a process required to
 deploy the new files. But if the extension templates were carried along
 with replication and backup naturally, then they'd be there.

The *extension* is already brought along with replication and binary
backup naturally, for non-.so-based extensions.  For logical backups, we
depend on the filesystem to provide the extension definition- I don't
particularly like that and it *isn't necessary* for non-.so-based
extensions.  In many ways, we do it the way we do specifically due to
how we distribute -contrib, as Dimitri has pointed out.  What I'd like
to see is support for dumping the extension's objects during a logical
dump, perhaps only for specific types of extensions (ones installed that
way), and maybe with some additional caveats on it (should we have
multiple extension namespaces?  or come up with some other name for
these objects?).

For .so-based extensions, yes, an admin is going to need to be involved
and I am not impressed by these notions that such a requirement is
unreasonable or overly draconian.

 And it would be nice if there was some hope for non-superusers to create
 extension templates, but that will never happen as long as they are
 files.

Agreed and as a follow-on to that, I'm pretty sure .so's will always be
files...

  You need a running PG for the *extension* to be installed, but with the
  filesystem-based extension approach we have today, the template (which
  are the files on the filesystem) don't need PG running
 
 I think you misread -- this is a list of issues if we move templates
 into the catalog. File-based templates obviously don't have this
 problem.

I don't see us being able to completely get rid of the existing
extension system which uses the file-based approach.  Certainly not for
a very long time, so I'm not sure that there's much discussion to be
had along these lines.

 I was just suggesting that a little more information in the catalog
 could improve dependency tracking and error handling. I'm not suggesting
 we ship any shared libraries anywhere -- that's still up to extension
 authors and PGXN. I'm also not suggesting that the error handling will
 be perfect or catch subtle mismatches.

Having more information about an extension in the catalog seems
reasonable to me- along the same lines of how we have dependency
information in dpkg for .deb's.  What I don't like is the notion of
upgrade scripts which are installed into the catalog as *scripts*.
Ditto for initial extension creation.  We already know how to drop and
recreate every object in the database, let's make use of that mechanism
instead.  That doesn't allow pg_dump/restore to *invisibly upgrade an
extension* (which is what happens now..), but I find that to be an
entirely good thing...

Thanks,

Stephen




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Andrew Dunstan


On 12/02/2013 03:21 PM, Ian Pilcher wrote:

On 12/02/2013 02:17 PM, Tom Lane wrote:

Ian Pilcher arequip...@gmail.com writes:

Yes.  And the problem is that there is no way to prevent OpenSSL from
accepting intermediate certificates supplied by the client.  As a
result, the server cannot accept client certificates signed by one
intermediate CA without also accepting *any* client certificate that can
present a chain back to the root CA.

Isn't that sort of the point?


I'm not sure what you're asking.  The desired behavior (IMO) would be to
accept client certificates signed by some intermediate CAs without
accepting any client certificate that can present a chain back to the
trusted root.  This is currently not possible, mainly due to the way
that OpenSSL works.




Wouldn't that amount to only partially trusting the root? It seems kinda 
odd. In any case, it's not something I think Postgres needs to solve.


cheers

andrew




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Tom Lane
Ian Pilcher arequip...@gmail.com writes:
 On 12/02/2013 02:17 PM, Tom Lane wrote:
 Isn't that sort of the point?

 I'm not sure what you're asking.  The desired behavior (IMO) would be to
 accept client certificates signed by some intermediate CAs without
 accepting any client certificate that can present a chain back to the
 trusted root.  This is currently not possible, mainly due to the way
 that OpenSSL works.

That notion seems pretty bogus to me.  If you don't trust the root CA to
not hand out child CA certs to untrustworthy people, then you don't really
trust the root CA, do you?  You should just list the certs of the
intermediate CAs you *do* trust in the server's root.crt.

In any case, the idea that this is somehow OpenSSL's fault and another
implementation of the same protocol wouldn't have the same issue sounds
pretty silly.

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Robert Haas
On Mon, Dec 2, 2013 at 3:02 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Greg Stark st...@mit.edu writes:
 On Mon, Dec 2, 2013 at 6:30 PM, Robert Haas robertmh...@gmail.com wrote:
 OK, I'll bite.  I've been trying to stay out of this thread, but I
 really *don't* understand what this patch is about.  Extensions, as
 they exist today, are installed from the filesystem and their contents
 are not dumped.  You're trying to create a new kind of extension which
 is installed from the system catalogs (instead of the file system) and
 is dumped.  Why should anyone want that?

 To benefit from ALTER EXTENSION … UPDATE … and \dx.

I don't see what benefit we get as far as ALTER EXTENSION .. UPDATE is
concerned.  You've got to do some kind of DDL to load the update
script into the database, and then you've got to run the UPDATE
itself.  That's pretty complicated.  You could have done the same
thing by just ALTERing the extension, without actually needing the
update script uploaded into the database.

 And technically the extension is not dumped, its templates are.

But you *could* dump the extension, and in fact it would be simpler
than what you're actually doing.

 It seems that part of the answer is that people would like to be able
 to install extensions via libpq.  You could almost write a client-side
 tool for that today just by using adminpack to write the files to the
 server, but you'd trip over the fact that files written by adminpack
 must be in either the data directory or the log directory.  But we
 could fix that easily enough.

 Trick question: when you've implemented said client and used it for a
 couple of (in-house) extensions, what do you think should happen at
 pg_restore time?

 Hint: in a properly designed ops model, pg_restore happens each and
   every day when the unattended cron job “rebases” the QA or testing
   environments from the production PITR backups, of course.

/me shrugs.  You could just as well ask what happens when you've
installed postgresql but not postgresql-contrib and try to restore a
database that needs hstore.  There's no getting around the fact that
if the extension isn't available, installation will fail.  Nor is that
a bug.  Still, I agree that there are use cases where that's not what
you want, and discussed some of them in the rest of the email from
which you took that quote.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Bruce Momjian
On Mon, Dec  2, 2013 at 03:01:25PM -0500, Andrew Dunstan wrote:
 I don't fully understand the issues but the discussion seens to indicate
 this.  Am I missing something?  Should I run some tests?
 
 
 AIUI, you need a complete chain from one end to the other. So the
 cert being checked can include the intermediate cert in what it
 sends, or it can be in the root.crt at the other end, but one way or
 another, the checking end needs a complete chain from a root cert to
 the cert from the other end.

Yes, this was my understanding.  Let me ask a simple question --- can
you put only the client cert on the client (postgresql.crt) and only the
root cert on the server (root.crt), and will it work?

I think Tom's question is whether OpenSSL will read through all the
entries in root.crt and find the one that signed the remote cert, and
has it always done that, i.e. does the remote side have to provide the
upper-level cert to match against.

One big thing I learned from this is that the local root.crt is only
used to verify remote certificates;  it isn't related to how the remote
end verifies your certificate.  Now, in most cases, the root.crt is
identical for clients and servers, but it doesn't have to be.

Put another way, I thought you put the root cert in your local root.crt
and the local cert in postgresql.crt or server.crt, but in fact the
requirement is that the local certificate chain to root must be in the
remote root.crt.
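
To make the layout concrete, the usual arrangement is:

    client side (~/.postgresql/):
        postgresql.crt  -- client's cert, plus any intermediates it sends
        postgresql.key  -- client's private key
        root.crt        -- CAs the *client* uses to verify the server
    server side ($PGDATA/):
        server.crt      -- server's cert, plus any intermediates it sends
        server.key      -- server's private key
        root.crt        -- CAs the *server* uses to verify client certs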

Of course, I might be wrong, but I am trying to clarify this for our
users.

-- 
  Bruce Momjian  br...@momjian.us    http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  What I've been trying to point out is that there's absolutely zero need
  for the 'extension template' part of this to make a pg_restore work for
  an entirely-in-the-catalog extension.  I realize that's how you've done
  it with this patch set but that doesn't make it necessary.
 
 If it's an extension, it's filtered out of pg_dump, so it's not part of
 your pg_restore. Full Stop. This point has been debated and there has
 been a very clear conclusion a year ago.
 
 What am I missing here?

I don't buy it.  I understand that you heard that from Tom with regard
to extensions, but that was also when he was arguing that what you're
building here shouldn't be called extensions.  I have a huge problem
with the idea that we can't dump out the definition of an
in-catalog-only extension, but it's okay to dump out some big blob of
text which is supposed to represent the same thing but in a much, much
more kludgey and inexact way.  How are we going to handle new keywords
being added in new major versions?  A pg_dump of the extension template
script is then going to be loaded into the new major version but will
not actually be able to be run because it'll error out...

I'm 100% confident that's not the only problem with this approach which
sticks script text blobs into the catalog as representations of
database objects.

Thanks,

Stephen




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Yes, this was my understanding.  Let me ask a simple question --- can
 you put only the client cert on the client (postgresql.crt) and only the
 root cert on the server (root.crt), and will it work?

Yes, that's surely always worked.

 I think Tom's question is whether OpenSSL will read through all the
 entries in root.crt and find the one that signed the remote cert, and
 has it always done that, i.e. does the remote side have to provide the
 upper-level cert to match against.

My point is specifically that it didn't seem to work when the client cert
file includes an intermediate CA cert, but not a full path to a trusted
root cert.  (Note that anything in the server's root.crt file is a trusted
root cert so far as the server is concerned --- it doesn't matter if it's
a child of some other CA.)

 One big thing I learned from this is that the local root.crt is only
 used to verify remote certificates;  it isn't related to how the remote
 end verifies your certificate.  Now, in most cases, the root.crt is
 identical for clients and servers, but it doesn't have to be.

Yes, we were already explaining that in the existing docs.

regards, tom lane




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Bruce Momjian
On Mon, Dec  2, 2013 at 03:07:48PM -0500, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Mon, Dec  2, 2013 at 12:59:41PM -0500, Tom Lane wrote:
  I see that you removed the sentence
  The root
  certificate should be included in every case where
  <filename>postgresql.crt</filename> contains more than one certificate.
 
  I don't fully understand the issues but the discussion seens to indicate
  this.  Am I missing something?  Should I run some tests?
 
 My recollection is that if the client cert file includes *only* the
 client's own cert, the server will puzzle out how that connects to the
 certs it has.  However, if the client cert file contains more than one
 cert (ie, client's cert and some intermediate-CA cert), the server
 will *not* try to associate the intermediate cert with some root cert it
 has.  It wants the chain the client sends to terminate in a cert that it
 has listed directly in root.crt.

OK, so you are saying if the client only supplies one cert, it will try
to find a signing cert in root.crt, but if multiple certs are supplied,
you have to get a cert match (not a signing).  I can adjust the docs for
that.

-- 
  Bruce Momjian  br...@momjian.us    http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Stephen Frost
* Ian Pilcher (arequip...@gmail.com) wrote:
 On 12/02/2013 02:17 PM, Tom Lane wrote:
  Ian Pilcher arequip...@gmail.com writes:
  Yes.  And the problem is that there is no way to prevent OpenSSL from
  accepting intermediate certificates supplied by the client.  As a
  result, the server cannot accept client certificates signed by one
  intermediate CA without also accepting *any* client certificate that can
  present a chain back to the root CA.
  
  Isn't that sort of the point?

Yes.

 I'm not sure what you're asking.  The desired behavior (IMO) would be to
 accept client certificates signed by some intermediate CAs without
 accepting any client certificate that can present a chain back to the
 trusted root.  This is currently not possible, mainly due to the way
 that OpenSSL works.

It's not possible because of the way certificate chains are *intended*
to work..  What you're asking for is something which was not a use-case
for certificates.  Cross-organizational trusts were done through
'bridge' CAs, which then can have policies about what attributes are
allowed to pass through a bridge or be trusted from a not-our-root-CA.
Intermediate certificates are not the same thing and were never intended
to be used in the way you're asking for.

Thanks,

Stephen




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Andrew Dunstan


On 12/02/2013 03:44 PM, Tom Lane wrote:

Bruce Momjian br...@momjian.us writes:

Let me ask a simple question --- can
you put only the client cert on the client (postgresql.crt) and only the
root cert on the server (root.crt), and will it work?

Yes, that's surely always worked.


Not if the client's cert has been signed by an intermediate CA, surely. Either
the server must have the intermediate CA cert in its root.crt or the 
client must supply it along with the end cert.


cheers




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Bruce Momjian
On Mon, Dec  2, 2013 at 03:44:18PM -0500, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Yes, this was my understanding.  Let me ask a simple question --- can
  you put only the client cert on the client (postgresql.crt) and only the
  root cert on the server (root.crt), and will it work?
 
 Yes, that's surely always worked.
 
  I think Tom's question is whether OpenSSL will read through all the
  entries in root.crt and find the one that signed the remote cert, and
  has it always done that, i.e. does the remote side have to provide the
  upper-level cert to match against.
 
 My point is specifically that it didn't seem to work when the client cert
 file includes an intermediate CA cert, but not a full path to a trusted
 root cert.  (Note that anything in the server's root.crt file is a trusted
 root cert so far as the server is concerned --- it doesn't matter if it's
 a child of some other CA.)

OK, so you are really saying that a multi-cert client has to supply a
chain right up to the root as the server will not walk the chain for you
up to the root, at least for some versions of openssl --- kind of makes
sense.  The tester earlier in this thread seems to have a version that
does, but as you stated, all versions might not.  Because you said that
all root.crt CAs are treated as trusted, can you just match an
intermediate CA that appears in root.crt?  Do you really need to match a
root CA, or just one in root.crt?

-- 
  Bruce Momjian  br...@momjian.us    http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Bruce Momjian
On Mon, Dec  2, 2013 at 03:57:45PM -0500, Andrew Dunstan wrote:
 
 On 12/02/2013 03:44 PM, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
 Let me ask a simple question --- can
 you put only the client cert on the client (postgresql.crt) and only the
 root cert on the server (root.crt), and will it work?
 Yes, that's surely always worked.
 
 Not if the client has been signed by an intermediate CA, surely.
 Either the server must have the intermediate CA cert in its root.crt
 or the client must supply it along with the end cert.

Right.  Tom is saying that for his openssl version, he had to have the
client supply a certificate _matching_ something in the remote root.crt,
not just signed by it.

-- 
  Bruce Momjian  br...@momjian.us    http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
 Yes, this was my understanding.  Let me ask a simple question --- can
 you put only the client cert on the client (postgresql.crt) and only the
 root cert on the server (root.crt), and will it work?

Only if the client cert is signed directly by the root cert, which is
generally discouraged..  The intermediate cert needs to be *somewhere*.
As for Tom's question about the client presenting an intermediate cert-
I don't know offhand if that works with our code, but as far as SSL
goes, last I checked it's *intended* to work.

 I think Tom's question is whether OpenSSL will read through all the
 entries in root.crt and find the one that signed the remote cert, and
 has it always done that, i.e. does the remote side have to provide the
 upper-level cert to match against.

Either side should be able to present certificates that build the chain
of trust.  Practically speaking, that may not work due to sheer number
of combinations which you might end up with- I don't recall specifically
if that ends up being an issue or not (it's been a while).

 One big thing I learned from this is that the local root.crt is only
 used to verify remote certificates;  it isn't related to how the remote
 end verifies your certificate.  Now, in most cases, the root.crt is
 identical for clients and servers, but it doesn't have to be.

That's correct, yes.

 Put another way, I thought you put the root cert in your local root.crt
 and the local cert in postgresql.crt or server.crt, but in fact the
 requirement is that the local certificate chain to root must be in the
 remote root.crt.

Right..  You use your cert to prove to the *other* guy that you are who
you claim to be.  Does that help in the understanding..?  You don't need
to prove to yourself who you are..

 Of course, I might be wrong, but I am trying to clarify this for our
 users.

Thanks for working on this.  I do wish we could improve how we handle
certificates in general- and that definitely goes for the documentation
as much as the various options we provide.

Thanks,

Stephen




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
 On Mon, Dec  2, 2013 at 03:57:45PM -0500, Andrew Dunstan wrote:
  
  On 12/02/2013 03:44 PM, Tom Lane wrote:
  Bruce Momjian br...@momjian.us writes:
  Let me ask a simple question --- can
  you put only the client cert on the client (postgresql.crt) and only the
  root cert on the server (root.crt), and will it work?
  Yes, that's surely always worked.
  
  Not if the client has been signed by an intermediate CA, surely.
  Either the server must have the intermediate CA cert in its root.crt
  or the client must supply it along with the end cert.
 
 Right.  Tom is saying that for his openssl version, he had to have the
 client supply a certificate _matching_ something in the remote root.crt,
 not just signed by it.

Err, no..  That's not right.

The client certificate needs to be *signed* by the root certificate, or
by an intermediate which is signed by the root and is available to the
server for verification.

The client certificate does *not* need to exist in the root.crt...

Thanks,

Stephen




Re: [HACKERS] Trust intermediate CA for client certificates

2013-12-02 Thread Ian Pilcher
On 12/02/2013 02:29 PM, Andrew Dunstan wrote:
 Wouldn't that amount to only partially trusting the root? It seems kinda
 odd. In any case, it's not something I think Postgres needs to solve.

I think that the fundamental problem is that authentication and
authorization are being conflated.  From the OpenSSL point-of-view, it
is checking that the client certificate is valid (not expired, signed by
a trusted chain of CAs, etc.); i.e. it's only doing authentication.

PostgreSQL is trusting any client certificate that is validated by
OpenSSL.  It's essentially trusting OpenSSL to do both authentication
and authorization, but OpenSSL isn't doing the latter.

Does PostgreSQL need to solve this?  I don't know, but it certainly
would be a nice capability to have -- if only to avoid the confusion
that currently surrounds the issue.
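
For what it's worth, the only knob PostgreSQL itself offers sits after
OpenSSL's chain validation, e.g. in pg_hba.conf (certusers being a
pg_ident.conf map the admin defines):

    # chain validation (authentication) is OpenSSL's; the CN-to-role
    # mapping is the only authorization applied on top of it
    hostssl  all  all  0.0.0.0/0  cert  map=certusers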

-- 

Ian Pilcher arequip...@gmail.com
   Sent from the cloud -- where it's already tomorrow




