Re: SQL objects UNITs (was: [HACKERS] Extension Templates S03E11)

2013-12-19 Thread Robert Haas
On Wed, Dec 18, 2013 at 10:05 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Stephen Frost wrote:
 * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:

  Basically with building `UNIT` we realise with hindsight that we failed to
  build a proper `EXTENSION` system, and we send that message to our users.

 Little difficult to draw conclusions about what our 'hindsight' will
 look like.

 I haven't been keeping very close attention to this, but I fail to see
 why extensions are so much of a failure.  Surely we can invent a new
 kind of extensions, ones whose contents specifically are dumped by
 pg_dump.  Regular extensions, the kind we have today, still wouldn't,
 but we could have a flag, say CREATE EXTENSION ... (WITH DUMP) or
 something.  That way you don't have to come up with UNIT at all (or
 whatever).  A whole new set of catalogs just to fix up a minor issue
 with extensions sounds a bit too much to me; we can just add this new
 thing on top of the existing infrastructure.

Yep.

I'm not very convinced that extensions are a failure.  I've certainly
had plenty of good experiences with them, and I think others have as
well, so I believe Dimitri's allegation that we've somehow failed here
is overstated.  That having been said, having a flag we can set to
dump the extension contents normally rather than just dumping a CREATE
EXTENSION statement seems completely reasonable to me.

ALTER EXTENSION foo SET (dump_members = true/false);

It's even got use cases outside of what Dimitri wants to do, like
dumping and restoring an extension that you've manually modified
without losing your changes.
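
For illustration, a minimal sketch of how such a flag might be used (the
extension and function names are invented, and dump_members is only the
proposed option, not something that exists today):

  -- install an extension, then patch one of its objects in place
  CREATE EXTENSION myext;
  CREATE OR REPLACE FUNCTION myext_hello() RETURNS text
    LANGUAGE sql AS $$ SELECT 'patched locally' $$;

  -- hypothetical flag: dump the member objects themselves rather than just a
  -- CREATE EXTENSION statement, so the local patch survives dump and restore
  ALTER EXTENSION myext SET (dump_members = true);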

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: SQL objects UNITs (was: [HACKERS] Extension Templates S03E11)

2013-12-19 Thread Cédric Villemain
On Thursday, December 19, 2013 at 14:01:17, Robert Haas wrote:
 On Wed, Dec 18, 2013 at 10:05 AM, Alvaro Herrera
 
 alvhe...@2ndquadrant.com wrote:
  Stephen Frost wrote:
  * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
   Basically with building `UNIT` we realise with hindsight that we
   failed to build a proper `EXTENSION` system, and we send that message
   to our users.
  
  Little difficult to draw conclusions about what our 'hindsight' will
  look like.
  
  I haven't been keeping very close attention to this, but I fail to see
  why extensions are so much of a failure.  Surely we can invent a new
  kind of extensions, ones whose contents specifically are dumped by
  pg_dump.  Regular extensions, the kind we have today, still wouldn't,
  but we could have a flag, say CREATE EXTENSION ... (WITH DUMP) or
  something.  That way you don't have to come up with UNIT at all (or
  whatever).  A whole new set of catalogs just to fix up a minor issue
  with extensions sounds a bit too much to me; we can just add this new
  thing on top of the existing infrastructure.
 
 Yep.
 
 I'm not very convinced that extensions are a failure.  I've certainly
 had plenty of good experiences with them, and I think others have as
 well, so I believe Dimitri's allegation that we've somehow failed here
 is overstated.  That having been said, having a flag we can set to
 dump the extension contents normally rather than just dumping a CREATE
 EXTENSION statement seems completely reasonable to me.
 
 ALTER EXTENSION foo SET (dump_members = true/false);
 
 It's even got use cases outside of what Dimitri wants to do, like
 dumping and restoring an extension that you've manually modified
 without losing your changes.


Isn't there some raw SQL that extension authors are supposed to be able to use 
in order to dump partial configuration tables and similar things (well, whatever 
we're supposed to be able to change in an extension)?

Yes, there is:
SELECT pg_catalog.pg_extension_config_dump('my_config',
                                            'WHERE NOT standard_entry');

(It is raw SQL here, but raw SQL was not appreciated for Extension 'Templates'... 
I stopped trying to follow/understand many of the arguments in those Extension 
email threads.)

Maybe something along those lines to also have the objects created by an 
extension dumped, and we're done. I even wonder whether Dimitri already has a 
patch for that, based on the work done for the Extensions feature.
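
For context, here is a minimal sketch of how an extension script typically uses 
that call today (the extension, table and column names are invented; 
pg_extension_config_dump() itself is the real API):

  -- inside the extension's install script (e.g. myext--1.0.sql)
  CREATE TABLE my_config (
      key            text PRIMARY KEY,
      value          text,
      standard_entry boolean NOT NULL DEFAULT false
  );

  -- mark the table as a configuration table: pg_dump will then dump the rows
  -- matching the filter, even though the table belongs to the extension
  SELECT pg_catalog.pg_extension_config_dump('my_config',
                                              'WHERE NOT standard_entry');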

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation




SQL objects UNITs (was: [HACKERS] Extension Templates S03E11)

2013-12-18 Thread Dimitri Fontaine
Simon Riggs si...@2ndquadrant.com writes:
 On 17 December 2013 23:42, Tom Lane t...@sss.pgh.pa.us wrote:
 We aim to have the simplest implementation that meets the stated need
 and reasonable extrapolations of that. Text in a catalog table is the
 simplest implementation. That is not a reason to reject it, especially
 when we aren't suggesting a viable alternative.

 The first part of this assertion is debatable, and the claim that no
 viable alternative has been suggested is outright wrong.

With due respect, it's only wrong when you buy into implementing
something new rather than improving extensions.

 Sounds like we have a way forward for this feature then, just not with
 the current patch.

 Can someone attempt to summarise the way forward, with any caveats and
 necessary restrictions? It would save further column inches of debate.

Here's my attempt:

# Inline Extension, Extension Templates

The problem with *Inline Extension* is the dump and restore policy. The
contents of an extension are never to be found in a `pg_dump` script.

The problem with *Extension Templates* is that we store the extension
scripts (plain text blobs) in the catalogs, where we already have the full
SQL objects and tools (such as `pg_dump` and `pg_depend`) to manipulate and
introspect them.

# The new thing™

A set of SQL objects that can be managed wholesale, with a version string
attached. The objects are part of `pg_dump` output, and the whole set can be
relocatable.

Name:

  - not `PACKAGE`, Oracle
  - not `MODULE`, that's already the name of a .so file
  - not `SYSTEM`, already something else
  - `BUNDLE`
  - `LIBRARY`
  - `UNIT`

I'll pick UNIT here.
  
Commands:

CREATE UNIT name [ SCHEMA ... ] [ [ NOT ] RELOCATABLE ] [ REQUIRE ...];

WITH UNIT name;
  commands
END UNIT name;

ALTER UNIT name OWNER TO role;
ALTER UNIT name ADD object definition;
ALTER UNIT name DROP object definition;
ALTER UNIT name SET SCHEMA new schema;
ALTER UNIT name UPDATE TO version string;
ALTER UNIT name SET [ NOT ] RELOCATABLE;
ALTER UNIT name REQUIRE a, b, c;

COMMENT ON UNIT name IS '';

DROP UNIT name [ CASCADE ];

The `UPDATE TO` command only sets a new version string.
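
To make the intended workflow concrete, here is a sketch of how the proposed
syntax might be used (hypothetical: none of these commands exist today, and the
unit, schema and object names are invented):

  CREATE UNIT mytools SCHEMA tools NOT RELOCATABLE;

  WITH UNIT mytools;
    CREATE FUNCTION tools.add_ints(a int, b int) RETURNS int
      LANGUAGE sql AS $$ SELECT a + b $$;
    CREATE VIEW tools.the_answer AS SELECT tools.add_ints(20, 22) AS answer;
  END UNIT mytools;

  ALTER UNIT mytools UPDATE TO '1.0';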

# Implementation details

We need a new `pg_unit` catalog, that looks almost exactly like the
`pg_extension` one, except for the `extconfig` and `extcondition` fields.

We need a way to `recordDependencyOnCurrentUnit()`, hence another pair of
static variables, `creating_unit` and `CurrentUnitObject`. Each and every
command we support for creating objects must be made aware of the new
`UNIT` concept, including `CREATE EXTENSION`.

The `pg_dump` dependencies have to be set so that all the objects are
restored independently first, as they are today, and only then do we issue
`CREATE UNIT` and a bunch of `ALTER UNIT ADD` commands, one per object.

# Event Trigger support

Event Triggers are to be provided for all the `UNIT` commands.

# Life with Extensions and Units

PostgreSQL now includes two different ways to package SQL objects, with
about the same feature set. The only difference is the `pg_restore`
behavior: *Extensions* are re-created from external resources, *Units* are
re-created from what's in the dump.

The smarts about `ALTER EXTENSION ... UPDATE` are not available when dealing
with *UNITs*, leaving the user or the client scripts to take care of that
entirely on their own.

In principle, a client can prepare a SQL script from a PGXN distribution and
apply it surrounded by `WITH UNIT` and `END UNIT` commands.

Upgrade scripts, once identified, can be run as straight SQL, adding a
simple `ALTER UNIT ... UPDATE TO ...` command before the `COMMIT` at the
end of the script. Identifying the upgrade script(s) may require
re-implementing the current Extension update smarts in whatever client-side
program is going to be built to support installing from PGXN etc.
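
Put together, a hypothetical upgrade run by such a client could look roughly
like this (again assuming the proposed `UNIT` syntax and the invented names
from above):

  BEGIN;
    -- body of the 1.0 -> 1.1 upgrade script, applied as plain SQL
    CREATE FUNCTION tools.sub_ints(a int, b int) RETURNS int
      LANGUAGE sql AS $$ SELECT a - b $$;
    ALTER UNIT mytools ADD FUNCTION tools.sub_ints(int, int);

    -- record the new version just before committing
    ALTER UNIT mytools UPDATE TO '1.1';
  COMMIT;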

# Conclusion

The main advantage of the `UNIT` proposal is that it copes very well with
relations and other usual schema objects, as the data are preserved at
`pg_restore` time.

A `UNIT` can also entirely replace an `EXTENSION`, including when it needs a
*module*, provided that the *module* is made available on the server's file
system before creating the functions in `LANGUAGE C` that depend on it.

It is possible to write a *UNIT distribution network* where client
software drives the installation of SQL objects within a UNIT, and this
client software needs to include UNIT update smarts too. It's also possible
to build that software as a set of Event Triggers on the `CREATE UNIT` and
`ALTER UNIT UPDATE TO` commands.

# Analysis

The main drawback is that rather than building on extensions, both in a
technical way and in building user trust, we are basically going to
deprecate extensions entirely, giving them a new name and an incompatible way
to manage them.

Only *contribs* are going to be shipped as extensions, as they are 

Re: [HACKERS] Extension Templates S03E11

2013-12-18 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 I keep telling you this, and it keeps not sinking in.

How can you say that? I've been spending a couple of years on designing
and implementing and arguing for a complete feature set where dealing
with modules is avoided at all costs.

The problem we have now is that I'm being told that the current feature
is rejected if it includes anything about modules, and not interesting
enough if it's not dealing with modules.

I tried my best to make it so that nothing in-core changes with respect to
modules, while finding out-of-core solutions to still cope with them. It's a
failure, OK.

I think we need a conclusion on this thread: Extension specs are frozen.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: SQL objects UNITs (was: [HACKERS] Extension Templates S03E11)

2013-12-18 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Here's my attempt:
 
 # Inline Extension, Extension Templates
 
 The problem with *Inline Extension* is the dump and restore policy. The
 contents of an extension are never to be found in a `pg_dump` script.

You keep coming back to this and I think you're taking too narrow a view
of the comments made on the prior threads.  No, we don't really want
extensions which have .sql files out on disk somewhere as part of
them to be dumped out through pg_dump because then it becomes unclear
which set of scripts should be used during restore.  What we're talking
about here is intended to not have that issue by using a different
namespace, a flag, something which identifies these extensions as being
defined through the catalog instead.

 # The new thing™
 
 A set of SQL objects that can be managed wholesale, with a version string
 attached to it. Objects are part of `pg_dump` output, the whole set can be
 relocatable, and has a version string attached.

I'd like to see more than just a single version string included and I
think that'd be beneficial for extensions too.

 Name:
[...]
 I'll pick UNIT here.

We can figure that later.

 Commands:
 
 CREATE UNIT name [ SCHEMA ... ] [ [ NOT ] RELOCATABLE ] [ REQUIRE ...];
 
 WITH UNIT name;
   commands
 END UNIT name;

Interesting approach- I had considered something similar by having a
'fake' schema created into which you built up the 'UNIT'.  The reason I
was thinking schema instead of begin/end style commands, as you have
above, is because of questions around transactions.  Do you think the
syntax you have here would require the definition to be all inside of a
single transaction?  Do we feel that would even be an issue or perhaps
that it *should* be done that way?  I don't currently have any strong
feelings one way or the other on this and I'm curious what others
think.

 The `UPDATE TO` command only sets a new version string.

So, one of the things I had been wondering about is if we could provide
a 'diff' tool.  Using your 'WITH UNIT' syntax above, an author might
need to only write their initial implementation, build up a 'UNIT' based
on it, then adjust that implementation with another 'WITH UNIT' clause
and then ask PG for the differences.  It's not clear if we could make
that work but there is definitely a set of desirable capabilities out
there, which some other databases have, around automated upgrade script
building and doing schema differences.

 # Implementation details
 # Event Trigger support

Not sure we're really ready to get into these yet.

 The main drawback is that rather than building on extensions, both in a
 technical way and in building user trust, we are basically going to
 deprecate extensions entirely, giving them a new name and an incompatible way
 to manage them.

I don't see this as ending up deprecating extensions, even if we build a
new thing with a new name.  I would argue that properly supported
extensions, such as those in contrib and the other 'main' ones, like
PostGIS, and others that have any external dependencies (eg: FDWs) would
almost certainly continue as extensions and would be packaged through
the normal OS packaging systems.  While you plan to use the event
trigger mechanism to build something on top of this which tries to act
like extensions-but-not, I think that's an extremely narrow and limited
use-case that very few people would have any interest in or use.

 Basically with building `UNIT` we realise with hindsight that we failed to
 build a proper `EXTENSION` system, and we send that message to our users.

Little difficult to draw conclusions about what our 'hindsight' will
look like.

Thanks,

Stephen




Re: SQL objects UNITs (was: [HACKERS] Extension Templates S03E11)

2013-12-18 Thread Alvaro Herrera
Stephen Frost wrote:
 * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:

  Basically with building `UNIT` we realise with hindsight that we failed to
  build a proper `EXTENSION` system, and we send that message to our users.
 
 Little difficult to draw conclusions about what our 'hindsight' will
 look like.

I haven't been keeping very close attention to this, but I fail to see
why extensions are so much of a failure.  Surely we can invent a new
kind of extensions, ones whose contents specifically are dumped by
pg_dump.  Regular extensions, the kind we have today, still wouldn't,
but we could have a flag, say CREATE EXTENSION ... (WITH DUMP) or
something.  That way you don't have to come up with UNIT at all (or
whatever).  A whole new set of catalogs just to fix up a minor issue
with extensions sounds a bit too much to me; we can just add this new
thing on top of the existing infrastructure.

I didn't much like the WITH UNIT/END UNIT thingy.  What's wrong with
CREATE foo; ALTER EXTENSION ADD foo?  There's a bit of a problem that if 
you create the object and die before being able to add it to the
extension, it would linger unreferenced; but that's easily fixable by
doing the creation in a transaction, I think.  (Alternatively, we could
have a single command that creates the extension and the contained
objects in one fell swoop, similar to how CREATE SCHEMA can do it; but
I'm not sure that's all that much better, and from a grammar POV it
probably sucks.)
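
For reference, that pattern already works with today's extensions; a minimal
sketch (the function name is invented, and extension foo is assumed to be
already installed):

  BEGIN;
    CREATE FUNCTION foo_hello() RETURNS text
      LANGUAGE sql AS $$ SELECT 'hello' $$;
    -- attach the new object to the extension; if anything fails before
    -- COMMIT, the object never lingers unreferenced
    ALTER EXTENSION foo ADD FUNCTION foo_hello();
  COMMIT;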

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-18 Thread Greg Stark
Yeah I think this whole discussion is happening at the wrong level. The
problem you're having, despite appearances, is not that people disagree
about the best way to accomplish your goals.

The problem is that not everyone is convinced your goals are a good idea.
Either they just don't understand the goals or they do understand them but
don't agree that they're a good idea.

Personally I'm in the former category and would welcome a detailed
explanation of the goals of the feature and what use cases those goals
enable.

I think Tom is in the latter category and needs a very good argument for why
those goals are important enough to outweigh the downsides.

I don't think loading shared libraries from RAM or a temp download
directory is a *complete* show stopper the way Tom says but it would
require a pretty compelling use case to make it worth the difficulties it
would cause.

-- 
greg


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Right.  I think a lot of the tension comes from people being unconvinced
 that the existing extension feature is an ideal model for this sort of
 use-case.  Extensions were mainly designed around the notion of a .so

The effort here is all about extending the Extension Use Case, yes.

 OTOH, for a set of pure-SQL objects, it's not necessary that there be a
 canonical text file somewhere, and we have in principle complete knowledge
 of the objects' semantics as well as the ability to dump-and-restore into
 newer PG versions.  So it's not at all clear that we should just adopt the
 existing model with the smallest possible changes --- which AFAICS is
 basically what this proposal is.  Maybe that's the way to go, but we
 should consider alternatives, and in particular I think there is much
 more reason to allow inside-the-database mutation of the SQL objects.

My thinking is that if we invent a new mechanism for extensions that are
not managed like contribs, we will find out that only contribs are going
to be using extensions.

Given the options of either growing extensions into being able to cope
with more than a single model or building an entirely new system having
most of the same feature set as Extensions, I'm pushing for the option
where we build on top of what we have already.

 I think the name "Extension Templates" is horrible because it misleads
 all of us on this list into thinking the proposed feature is completely
 something other than what it is.  I don't have a better name offhand,
 but that's got to change before it becomes a feature.

 Given your previous para, I wonder if "library" or "package" would work
 better.  I agree that "template" isn't le mot juste.

We can't use “package” because it means something very different in the
direct competition (Oracle). I have other propositions, but they are only
relevant if we choose not to improve Extensions… right?

Regards,
-- 
Dimitri Fontaine                      06 63 07 10 78
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
  OTOH, for a set of pure-SQL objects, it's not necessary that there be a
  canonical text file somewhere, and we have in principle complete knowledge
  of the objects' semantics as well as the ability to dump-and-restore into
  newer PG versions.  So it's not at all clear that we should just adopt the
  existing model with the smallest possible changes --- which AFAICS is
  basically what this proposal is.  Maybe that's the way to go, but we
  should consider alternatives, and in particular I think there is much
  more reason to allow inside-the-database mutation of the SQL objects.
 
 My thinking is that if we invent a new mechanism for extensions that are
 not managed like contribs, we will find out that only contribs are going
 to be using extensions.

That's only accurate if the new mechanism supports .so's, which seems
unlikely to be the case.  What I think we'd end up with is a split
between extensions, which would be things containing .so's, and
libraries or what-have-you, which would be more-or-less everything
else.  That kind of a break-down strikes me as perfectly reasonable.
There would also be flexibility in that an author might choose to use an
extension even in cases where it's not strictly necessary to do so, for
whatever reason they want.

 Given the options of either growing extensions into being able to cope
 with more than a single model or building an entirely new system having
 most of the same feature set as Extensions, I'm pushing for the option
 where we build on top of what we have already.

I'm not sure that we need to throw away everything that exists to add on
this new capability; perhaps we can build a generic versioned
object-container system on which extensions and
packages/libraries/classes/whatever can also be built on (or perhaps
that's what 'extensions' end up morphing into).

 We can't use “package” because it means something very different in
 direct competition. I have other propositions, but they are only
 relevant if we choose not to improve Extensions… right?

I'd like to see extensions improved.  I don't feel like the proposed
'extension templates' is the way to do that because I don't think it
really solves anything and it adds a layer that strikes me as wholly
unnecessary.  I could see pulling in the control file contents as a
catalog, adding in dependency information which could be checked
against, perhaps hard vs. soft dependencies, and other things that make
sense to track for extensions-currently-installed into a given database.

However, as I understand it from the various discussions on this topic
outside of this list, the immediate concern is the need for a multi-OS
extension distribution network with support for binaries, .so's and
.dll's and whatever else, to make installing extensions easier for
developers on various platforms.  I'm all for someone building that and
dealing with the issues associated with that, but building a client for
it in core, either in a way where a backend would reach out and
download the files or accepting binary .so's through the frontend
protocol, isn't the first step in that and I very much doubt it would
ever make sense.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 My thinking is that if we invent a new mechanism for extensions that are
 not managed like contribs, we will find out that only contribs are going
 to be using extensions.

 That's only accurate if the new mechanism supports .so's, which seems
 unlikely to be the case.

Really?

Look at dynamic_library_path, then at a classic CREATE FUNCTION command
that maps into a C provided symbol:

  CREATE OR REPLACE FUNCTION prefix_range_in(cstring)
  RETURNS prefix_range AS '$libdir/prefix' LANGUAGE C IMMUTABLE STRICT;

A packaging or distribution software will have no problem removing the
'$libdir/' part of the magic AS string here. Once removed, prefix.so
will be loaded from anywhere on the file system paths listed into the
dynamic_library_path GUC.

So now you no longer need file-system write privileges on a central place
owned by root; the library can live anywhere else, and the backend hooks,
when properly set up, will be able to benefit from that.
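
A minimal sketch of what that looks like, assuming prefix.so has been copied
somewhere the server can read (dynamic_library_path is a real GUC; the
directory below is made up):

  -- superuser-only setting; it could also be put in postgresql.conf
  SET dynamic_library_path = '/home/postgres/lib:$libdir';

  -- with the '$libdir/' prefix dropped, prefix.so is searched for along
  -- dynamic_library_path instead of only in the root-owned $libdir
  CREATE OR REPLACE FUNCTION prefix_range_in(cstring)
  RETURNS prefix_range AS 'prefix' LANGUAGE C IMMUTABLE STRICT;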

The missing bits are where to find the extension control files and
scripts.

The only reason why the current proposal mentions *nothing* about how to
deal with modules (.so etc.) is that each and every time that problem is
mentioned, the answer from Tom is “rejected, full stop”.

 What I think we'd end up with is a split
 between extensions, which would be things containing .so's, and
 libraries or what-have-you, which would be more-or-less everything
 else.  That kind of a break-down strikes me as perfectly reasonable.

Only if it's the best we can do.

 There would also be flexibility in that an author might choose to use an
 extension even in cases where it's not strictly necessary to do so, for
 whatever reason they want.

Note that of course you can still install proper OS packages when we
ship with support for Extension Templates.

 I'd like to see extensions improved.  I don't feel like the proposed
 'extension templates' is the way to do that because I don't think it
 really solves anything and it adds a layer that strikes me as wholly
 unnecessary.

You still haven't proposed any other way to go at it, and this is already
my fourth detailed proposal. I did spend time designing what I think
you're trying to say hand-wavily in that exchange, and I don't quite
like the result, as I see no way for it not to entirely deprecate
extensions.

Maybe the proper answer is that we should actually confine extensions to
being the way to install contribs and nothing else, and deprecate them
for cases where you don't have an OS level package.  It seems really
strange to build a facility with such a generic name as “extension” only
to resist changing any of it, then stop using it at first opportunity.

Also, I'm not sure about the consequences in terms of user trust if we
build something new to solve a use case that looks so much the same.

 However, as I understand it from the various discussions on this topic
 outside of this list, the immediate concern is the need for a multi-OS
 extension distribution network with support for binaries, .so's and
 .dll's and whatever else, to make installing extensions easier for
 developers on various platforms.  I'm all for someone building that and
 dealing with the issues associated with that, but building a client for
 it in core, either in a way where a backend would reach out and
 download the files or accepting binary .so's through the frontend
 protocol, isn't the first step in that and I very much doubt it would
 ever make sense.

That's exactly the reason why the first piece of that proposal has
absolutely nothing to do with building said client, and is all about how
NOT to have to build it in core *ever*.

If you don't like what I'm building because it's not solving the problem
you want to solve… well don't use what I'm building, right?

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  My thinking is that if we invent a new mechanism for extensions that are
  not managed like contribs, we will find out that only contribs are going
  to be using extensions.
 
  That's only accurate if the new mechanism supports .so's, which seems
  unlikely to be the case.
 
 Really?

Yes.  The 'new mechanism' to which I was referring was when the entire
XXX (extension, library, package, whatever) is in the PG catalog and not
managed through files on the filesystem, as contrib-like extensions are.

I'm quite aware that what you're asking for is technically possible-
that's not what this discussion is about.

 The only reason why the current proposal mention *nothing* about how to
 deal with modules (.so etc) is because each and every time a mention is
 made about that problem, the answer from Tom is “rejected, full stop”.

Perhaps I'm not making myself clear here, but *I agree with Tom* on this
point.

  There would also be flexibility in that an author might choose to use an
  extension even in cases where it's not strictly necessary to do so, for
  whatever reason they want.
 
 Note that of course you can still install proper OS packages when we
 ship with support for Extension Templates.

With the various naming conflicts and other risks associated with doing
that, which I don't believe were very clearly addressed.  An
off-the-cuff answer to that issue is not sufficient, imv.

 You still haven't proposed any other way to go at it, and this is already
 my fourth detailed proposal.

I didn't outline a proposal which provides what you want, no.  That was
intentional.

 I did spend time designing what I think
 you're trying to say hand-wavily in that exchange, and I don't quite
 like the result, as I see no way for it not to entirely deprecate
 extensions.

I don't think we need to, nor should we, deprecate extensions entirely
when that's the approach which *should be* used for .so requiring
extensions.  Obviously, that's my opinion, and you don't agree with it,
and it seems neither of us is willing to shift from that position.

 Maybe the proper answer is that we should actually confine extensions to
 being the way to install contribs and nothing else, and deprecate them
 for cases where you don't have an OS level package.  It seems really
 strange to build a facility with such a generic name as “extension” only
 to resist changing any of it, then stop using it at first opportunity.

I'm open to changing how extensions work, to adding dependency
information and making other improvements.  Being interested in
improving the extension system doesn't mean I'm required to support
shipping .so's in this manner or installing text script blobs into
catalog tables.

  However, as I understand it from the various discussions on this topic
  outside of this list, the immediate concern is the need for a multi-OS
  extension distribution network with support for binaries, .so's and
  .dll's and whatever else, to make installing extensions easier for
  developers on various platforms.  I'm all for someone building that and
  dealing with the issues associated with that, but building a client for
  it in core, either in a way where a backend would reach out and
  download the files or accepting binary .so's through the frontend
  protocol, isn't the first step in that and I very much doubt it would
  ever make sense.
 
 That's exactly the reason why the first piece of that proposal has
 absolutely nothing to do with building said client, and is all about how
 NOT to have to build it in core *ever*.

You can already build what you're after without extension templates
entirely, if you're allowing files to be stashed out on the filesystem
anywhere.  Your argument that you need root doesn't hold any water with
me on this issue- there are quite a few mechanisms out there already which
allow you to trivially become root.  You can write a pl/perlu function which
sudo's and apt-get installs your favorite extension, if you like.  That doesn't
mean we should build a system into core which tries to do that for you.
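
(For the record, a deliberately unsafe sketch of what that would look like; it
assumes plperlu is installed and the server's OS user has passwordless sudo,
and the function and package names are invented:)

  CREATE FUNCTION install_os_package(pkg text) RETURNS text
  LANGUAGE plperlu AS $perl$
      my ($pkg) = @_;
      # runs with the OS privileges of the database server process
      return `sudo apt-get install -y $pkg 2>&1`;
  $perl$;

  SELECT install_os_package('postgresql-9.3-prefix');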

And, yes, I know that you pushed for and got the GUC in to allow you to
have other places to pull .so's from.  Having that flexibility doesn't
mean we have to support populating that directory from PG.  You probably
would have been better off pushing for a GUC that allowed a '.d' like
directory system for extensions to be defined in.  That *still* doesn't
require extension templates, storing SQL scripts as text blobs in
catalog tables, and you can even avoid the whole 'root' concern if you
want.

 If you don't like what I'm building because it's not solving the problem
 you want to solve… well don't use what I'm building, right?

I'm pretty sure that I've pointed out a number of issues that go well
beyond not liking it.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Tom Lane
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 Stephen Frost sfr...@snowman.net writes:
 That's only accurate if the new mechanism supports .so's, which seems
 unlikely to be the case.

 Really?

Yes, really.

 So now, you don't need anymore to have file system write privileges into
 a central place owned by root, it can be anywhere else,

Modern OSes have security checks that can prevent loading libraries from
random places.  This is widely seen as not merely a good thing, but
security-critical for network-exposed daemons.  Of which we are one.

I keep telling you this, and it keeps not sinking in.  One more time: any
feature that does what you want will be dead on arrival so far as vendors
like Red Hat are concerned.  I don't care how creatively you argue for it,
they will refuse to ship it (or at least refuse to disable the SELinux
policy that prevents it).  Period.  Please stop wasting my time with
suggestions otherwise, because it won't happen.

So what we have left to discuss is whether we want to develop, and base a
community extension-distribution infrastructure on, a mechanism that some
popular vendors will actively block.  I'm inclined to think it's a bad
idea, but I just work here.

 If you don't like what I'm building because it's not solving the problem
 you want to solve… well don't use what I'm building, right?

What worries me is that time and effort will go into this instead of
something that would be universally acceptable/useful.  I grant that
there are some installations whose security policies are weak enough
that they could use what you want to build.  But I'm not sure how
many there are, and I'm worried about market fragmentation if we need
to have more than one distribution mechanism.

Of course, we're already talking about two distribution infrastructures
(one for packages including .so's, and one for those without).  I see no
way around that unless we settle for the status quo.  But what you're
suggesting will end up with three distribution infrastructures, with
duplicative methods for packages including .so's depending on whether
they're destined for security-conscious or less-security-conscious
platforms.  I don't want to end up with that.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Josh Berkus
On 12/16/2013 11:44 AM, Tom Lane wrote:
 Right.  I think a lot of the tension comes from people being unconvinced
 that the existing extension feature is an ideal model for this sort of
 use-case.  Extensions were mainly designed around the notion of a .so
 with some SQL datatype/function/etc declarations that have to match up
 with the C code.  So it made sense for them to be relatively static things
 that live in the filesystem.  Notably, if you're migrating to a new PG
 major version, you're at the very least gonna have to recompile the C code
 and quite likely will need to change it some.  So adapting the SQL code
 if needed goes along with that, and would never be automatic in any case.

I see what you mean.  On the other hand:

a) introducing a new concept would require a new reserved word

b) it would also require figuring out how it interacts with extensions

c) extensions already have versioning, which this feature needs

d) extensions already have dependencies, which this feature needs

While it splits Extensions into two slightly different concepts, I find
that on the whole less confusing than the alternative.

On 12/16/2013 05:17 PM, Jim Nasby wrote:
 Somewhat related to this, I really wish Postgres had the idea of a
 class, that was allowed to contain any type of object and could be
 instantiated when needed. For example, if we had an address class,
 we could instantiate it once for tracking our customer addresses, and a
 second time for tracking the addresses customers supply for their
 employers. Such a mechanism would probably be ideal for what we need,
 but of course you'd still have the question of how to load a class
 definition that someone else has published.

Well, the idea originally (POSTGRES) was for the Type, Domain, and
Inheritance system to do just what you propose.  Nobody ever worked out
all the practicalities and gotchas to make it really work in production,
though.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Simon Riggs
On 13 December 2013 18:42, Stephen Frost sfr...@snowman.net wrote:
 * Jeff Davis (pg...@j-davis.com) wrote:
 For what it's worth, I think the idea of extension templates has good
 conceptual integrity. Extensions are external blobs. To make them work
 more smoothly in several ways, we move them into the catalog. They have
 pretty much the same upsides and downsides of our existing extensions,
 aside from issues directly related to filesystem vs. catalog.

 I've never particularly liked the idea that extensions are external
 blobs, to be honest.

I've been reading this, trying to catch back up with hackers. This
thread is amazing because this feature ought to be a shoo-in.

Jeff expresses his points politely, but not strongly enough. I agree with him.

I keep seeing people repeat "I don't like blobs" as if that were an
objection. There is no danger or damage from doing this. I can't see
any higher beauty that we're striving for by holding out. Why not
allow the user to choose XML, JSON, YAML, or whatever they choose?

Some things need to wait for the right design, like RLS, for a variety
of reasons. I don't see any comparison here and I can't see any reason
for a claim of veto on grounds of higher wisdom to apply to this case.

Blocking this stops nothing, it just forces people to do an extra
non-standard backflip to achieve their goals. Is that what we want?
Why?

 Stephen had some legitimate concerns. I don't entirely agree, but they
 are legitimate concerns, and we don't want to just override them.

 At the same time, I'm skeptical of the alternatives Stephen offered
 (though I don't think he intended them as a full proposal).

 It was more thoughts on how I'd expect these things to work.  I've also
 been talking to David about what he'd like to see done with PGXN and his
 thinking was a way to automate creating RPMs and DEBs based on PGXN spec
 files, but he points out that the challenge there is dealing with
 external dependencies.

 So right now I'm discouraged about the whole idea of installing
 extensions using SQL. I don't see a lot of great options. On top of
 that, the inability to handle native code limits the number of
 extensions that could make use of such a facility, which dampens my
 enthusiasm.

 Yeah, having looked at PGXN, it turns out that some 80+% of the
 extensions there have .c code included, something well beyond what I was
 expecting, but most of those cases also look to have external
 dependencies (eg: FDWs), which really makes me doubt this notion that
 they could be distributed independently and outside of the OS packaging
 system (or that it would be a particularly good idea to even try...).

That is clear evidence that the packaging is getting in the way of
extensions that don't include binary programs.

My only personal interest in this is to stimulate the writing of
further extensions, which is fairly clearly hampered by the overhead
required for packaging.

Who needs old-fashioned package management? Some bigger extensions
need it. Smaller ones don't. Who are we to force people to distribute
their wares in only certain ways?

I can't see any reason to block this, nor any better design than the
flexible, neutral and simple one proposed. If there's some secret
reason to block this, please let me know off list cos I currently
don't get it at all.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Stephen Frost
* Simon Riggs (si...@2ndquadrant.com) wrote:
 I keep seeing people repeat "I don't like blobs" as if that were an
 objection. There is no danger or damage from doing this. I can't see
 any higher beauty that we're striving for by holding out. Why not
 allow the user to choose XML, JSON, YAML, or whatever they choose?

I have no idea where you're going with this, but I *do* object to
sticking an SQL script which defines a bunch of objects into a catalog
table *right next to where they are properly defined*.  There's just no
sense in it that I can see, except that it happens to mimic what we do
today- to no particular purpose.

 Blocking this stops nothing, it just forces people to do an extra
 non-standard backflip to achieve their goals. Is that what we want?
 Why?

It's hardly non-standard when it's required for 80+% of the extensions
that exist today anyway.

 That is clear evidence that the packaging is getting in the way of
 extensions that don't include binary programs.

I'm totally on-board with coming up with a solution for extensions which
do not include .so's.  Avoiding mention of the .so issue doesn't somehow
change this solution into one which actually solves the issue around
non-binary extensions.

 My only personal interest in this is to stimulate the writing of
 further extensions, which is fairly clearly hampered by the overhead
 required for packaging.

I'm not convinced of that but I agree that we can do better and would
like to see a solution which actually makes progress in that regard.  I
don't feel that this does that- indeed, it hardly changes the actual
packaging effort required of extension authors at all.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Heikki Linnakangas

On 12/17/2013 08:32 PM, Stephen Frost wrote:

* Simon Riggs (si...@2ndquadrant.com) wrote:

My only personal interest in this is to stimulate the writing of
further extensions, which is fairly clearly hampered by the overhead
required for packaging.


I'm not convinced of that but I agree that we can do better and would
like to see a solution which actually makes progress in that regard.  I
don't feel that this does that- indeed, it hardly changes the actual
packaging effort required of extension authors at all.


I'll repeat my requirement: the same extension must be installable the 
old way and the new way. I've lost track which of the ideas being 
discussed satisfy that requirement, but I object to any that doesn't.


Considering that, I don't see how any of this is going to reduce the 
overhead required for packaging. An extension author will write the 
extension exactly the same way he does today. Perhaps you meant the 
overhead of installing an extension, ie. the work that the DBA does, not 
the work that the extension author does?


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Simon Riggs
On 17 December 2013 17:54, Tom Lane t...@sss.pgh.pa.us wrote:

 So now, you don't need anymore to have file system write privileges into
 a central place owned by root, it can be anywhere else,

 Modern OSes have security checks that can prevent loading libraries from
 random places.  This is widely seen as not merely a good thing, but
 security-critical for network-exposed daemons.  Of which we are one.

 I keep telling you this, and it keeps not sinking in.  One more time: any
 feature that does what you want will be dead on arrival so far as vendors
 like Red Hat are concerned.  I don't care how creatively you argue for it,
 they will refuse to ship it (or at least refuse to disable the SELinux
 policy that prevents it).  Period.  Please stop wasting my time with
 suggestions otherwise, because it won't happen.

 So what we have left to discuss is whether we want to develop, and base a
 community extension-distribution infrastructure on, a mechanism that some
 popular vendors will actively block.  I'm inclined to think it's a bad
 idea, but I just work here.

Yes, there is a strong argument against enabling Postgres
out-of-the-box to allow loading of .so files from random places and
bypassing distro security procedures.

But that argument doesn't apply to all types of extension. For example, data.

In any case, right now, it's easy to load an FDW and then do an INSERT
SELECT from a foreign server into a text column. There are no
restrictions on URLs to access foreign servers. Then write a *trusted*
PL/pgSQL procedure to execute the contents of the text column to do
whatever. All you need is the Postgres foreign data wrapper loaded, an
insecure URL and a route to it.

I don't see a material difference between that route and the new one
proposed. The only difference is really that the new route would be
blessed as being the officially recommended way to import objects
without needing a file, and to allow them to be backed up and
restored.
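
Spelled out, that route looks roughly like this (postgres_fdw, foreign tables
and PL/pgSQL DO blocks are all real features; the server, credentials and
table layout below are invented):

  CREATE EXTENSION postgres_fdw;

  CREATE SERVER scripts_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'scripts.example.com', dbname 'scripts');
  CREATE USER MAPPING FOR CURRENT_USER SERVER scripts_srv
    OPTIONS (user 'reader', password 'secret');
  CREATE FOREIGN TABLE remote_scripts (name text, body text)
    SERVER scripts_srv OPTIONS (table_name 'scripts');

  -- fetch the SQL over the wire and execute it; no file ever touches disk
  DO $do$
  DECLARE s text;
  BEGIN
    FOR s IN SELECT body FROM remote_scripts ORDER BY name LOOP
      EXECUTE s;
    END LOOP;
  END
  $do$;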

So perhaps all we need is a module that once loaded allows other
things to be loaded. (Red Hat etc can then ban that as they see fit.)

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Simon Riggs
On 17 December 2013 18:32, Stephen Frost sfr...@snowman.net wrote:
 * Simon Riggs (si...@2ndquadrant.com) wrote:
 I keep seeing people repeat "I don't like blobs" as if that were an
 objection. There is no danger or damage from doing this. I can't see
 any higher beauty that we're striving for by holding out. Why not
 allow the user to choose XML, JSON, YAML, or whatever they choose?

 I have no idea where you're going with this, but I *do* object to
 sticking an SQL script which defines a bunch of objects into a catalog
 table *right next to where they are properly defined*.  There's just no
 sense in it that I can see, except that it happens to mimic what we do
 today- to no particular purpose.

The purpose is clear: so it is part of the database backup. It's a
fairly boring purpose, not fancy at all. But it is a purpose, indeed
*the* purpose.

I don't see any technical objection here.

We aim to have the simplest implementation that meets the stated need
and reasonable extrapolations of that. Text in a catalog table is the
simplest implementation. That is not a reason to reject it, especially
when we aren't suggesting a viable alternative.

I have zero attachment to this design, my interest is in the feature.
How do we achieve the feature if not this way?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 On 17 December 2013 18:32, Stephen Frost sfr...@snowman.net wrote:
 I have no idea where you're going with this, but I *do* object to
 sticking an SQL script which defines a bunch of objects into a catalog
 table *right next to where they are properly defined*.  There's just no
 sense in it that I can see, except that it happens to mimic what we do
 today- to no particular purpose.

 The purpose is clear: so it is part of the database backup. It's a
 fairly boring purpose, not fancy at all. But it is a purpose, indeed
 *the* purpose.

The point Stephen is making is that it's just as easy, and far more
reliable, to dump the package-or-whatever-you-call-it by dumping the
definitions of the contained objects, as to dump it by dumping the text
blob it was originally created from.  So I don't see a lot of merit
to claiming that we need to keep the text blob for this purpose.

We did it differently for extensions in part because you can't dump a .so
as a SQL command, so dump-the-contained-objects wasn't going to be a
complete backup strategy in any case.  But for a package containing only
SQL objects, that's not a problem.

 We aim to have the simplest implementation that meets the stated need
 and reasonable extrapolations of that. Text in a catalog table is the
 simplest implementation. That is not a reason to reject it, especially
 when we aren't suggesting a viable alternative.

The first part of this assertion is debatable, and the claim that no
viable alternative has been suggested is outright wrong.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-17 Thread Simon Riggs
On 17 December 2013 23:42, Tom Lane t...@sss.pgh.pa.us wrote:

 We aim to have the simplest implementation that meets the stated need
 and reasonable extrapolations of that. Text in a catalog table is the
 simplest implementation. That is not a reason to reject it, especially
 when we aren't suggesting a viable alternative.

 The first part of this assertion is debatable, and the claim that no
 viable alternative has been suggested is outright wrong.

I just hadn't read about that myself. All I'd read was why this
feature should be blocked.

Sounds like we have a way forward for this feature then, just not with
the current patch.

Can someone attempt to summarise the way forward, with any caveats and
necessary restrictions? It would save further column inches of debate.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-16 Thread Josh Berkus
On 12/15/2013 10:47 PM, Jeff Davis wrote:
 The patch offers an alternative to dropping files on the filesystem
 before doing CREATE EXTENSION. Instead, if the extension has no C code,
 you can put it in the catalog using ordinary SQL access, and execute the
 same kind of CREATE EXTENSION. Aside from that, it's pretty much
 identical to existing extensions.

OK, so the idea is that for SQL-only extensions (i.e. ones without
.so's) the user shouldn't be required to create a file on disk.  That
seems straightforward and acceptable.

 Stephen doesn't like the idea that the SQL in an extension is a blob of
 text. 

I, personally, would prefer per-object line-items, but I don't think
that's a deal-breaker.  Having a single text blob does match up with
existing Extension design.

Note for Dimitri, though: I think that having line-item objects in
dependency order would make this feature vastly more useful for
schema-template maintenance.  Give it some thought.

 There are weird cases, like if you make local modifications to
 objects held in an extension, then dump/reload will lose those local
 modifications.

What does dump/reload do with regular Extensions currently in that case?

 Another issue, which I agree is dubious in many
 situations, is that the version of an extension is not preserved across
 dump/reload (this is actually a feature, which was designed with
 contrib-style extensions in mind, but can be surprising in other
 circumstances).

Well, this should work with a versioning system, in which dump/reload
can load older versions of the extension if they are present, the same
as external Extensions do now.  Is that built in?

 This isn't necessarily a dead-end, but there are a lot of unsettled
 issues, and it will take some soul-searching to answer them. Is an
 extension a blob of text with a version, that's maintained in some
 external repo?

Well, plus potentially binaries and library references, yes.  Although
you could describe all of Postgres as a bunch of text blobs and some
library references, when you get right down to it.

 Is it the job of postgres to ensure that dump/reload
 creates the same situation that you started with, including local
 modifications to objects that are part of an extension?

IMHO: No.  AFAIK, if a user modifies, say, information_schema views in
PostgreSQL, we don't respect that in dump/restore either.

Now, I can see adding to this mechanism a method for tracking such
modifications in a way that pgdump can support them.  But that can
easily be a version 2 feature.

 Should
 everything be an extension, or do we need to invent a new concept for
 some of the use cases? What role to external tools play in all of this?

So, the reason I was confused by this feature -- and the reason Stephen
hates it, I think -- is that I thought it was solving the "Extensions
don't follow replication, and they are complicated to install if your OS
doesn't have good packages" problem.  It's not, and it never will solve
that issue.

It's solving a completely different problem, to wit:

Some PostgreSQL shops with lots of servers have large internal libraries
of functions, views, and similar code that they've written to support
their applications, which don't comprise a complete database.  This
feature would allow them to package those libraries, and version,
upgrade and track them, without requiring a filesystem-based install.  I
myself have a couple clients who could benefit from this.

I think the name "Extension Templates" is horrible because it misleads
all of us on this list into thinking the proposed feature is completely
something other than what it is.  I don't have a better name offhand,
but that's got to change before it becomes a feature.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-16 Thread Josh Berkus
On 12/16/2013 10:53 AM, Josh Berkus wrote:
 Some PostgreSQL shops with lots of servers have large internal libraries
 of functions, views, and similar code that they've written to support
 their applications, which don't comprise a complete database.  This
 feature would allow them to package those libraries, and version,
 upgrade and track them, without requiring a filesystem-based install.  I
 myself have a couple clients who could benefit from this.

cc'ing Jim Nasby, since I think Enova is part of the target market for
this feature.  Jim, have you taken a look at this?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-16 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 So, the reason I was confused by this feature -- and the reason Stephen
 hates it, I think -- is that I thought it was solving the "Extensions
 don't follow replication, and they are complicated to install if your OS
 doesn't have good packages" problem.  It's not, and it never will solve
 that issue.

 It's solving a completely different problem, to wit:

 Some PostgreSQL shops with lots of servers have large internal libraries
 of functions, views, and similar code that they've written to support
 their applications, which don't comprise a complete database.  This
 feature would allow them to package those libraries, and version,
 upgrade and track them, without requiring a filesystem-based install.  I
 myself have a couple clients who could benefit from this.

Right.  I think a lot of the tension comes from people being unconvinced
that the existing extension feature is an ideal model for this sort of
use-case.  Extensions were mainly designed around the notion of a .so
with some SQL datatype/function/etc declarations that have to match up
with the C code.  So it made sense for them to be relatively static things
that live in the filesystem.  Notably, if you're migrating to a new PG
major version, you're at the very least gonna have to recompile the C code
and quite likely will need to change it some.  So adapting the SQL code
if needed goes along with that, and would never be automatic in any case.
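
To make that coupling concrete, here is a sketch in the style of a
contrib install script, assuming the compiled module is already present;
the SQL declarations have to line up with the C symbols the .so exports:

  -- excerpt in the style of hstore's install script; MODULE_PATHNAME is
  -- substituted from the extension's control file at CREATE EXTENSION time
  CREATE TYPE hstore;              -- shell type, completed below
  CREATE FUNCTION hstore_in(cstring) RETURNS hstore
      AS 'MODULE_PATHNAME' LANGUAGE C IMMUTABLE STRICT;
  CREATE FUNCTION hstore_out(hstore) RETURNS cstring
      AS 'MODULE_PATHNAME' LANGUAGE C IMMUTABLE STRICT;
  CREATE TYPE hstore (
      INTERNALLENGTH = -1,
      INPUT = hstore_in,
      OUTPUT = hstore_out
  );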

OTOH, for a set of pure-SQL objects, it's not necessary that there be a
canonical text file somewhere, and we have in principle complete knowledge
of the objects' semantics as well as the ability to dump-and-restore into
newer PG versions.  So it's not at all clear that we should just adopt the
existing model with the smallest possible changes --- which AFAICS is
basically what this proposal is.  Maybe that's the way to go, but we
should consider alternatives, and in particular I think there is much
more reason to allow inside-the-database mutation of the SQL objects.

 I think the name "Extension Templates" is horrible because it misleads
 all of us on this list into thinking the proposed feature is something
 completely other than what it is.  I don't have a better name offhand,
 but that's got to change before it becomes a feature.

Given your previous para, I wonder if "library" or "package" would work
better.  I agree that "template" isn't le mot juste.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-16 Thread Jim Nasby

On 12/16/13, 1:00 PM, Josh Berkus wrote:

On 12/16/2013 10:53 AM, Josh Berkus wrote:

Some PostgreSQL shops with lots of servers have large internal libraries
of functions, views, and similar code that they've written to support
their applications, which don't comprise a complete database.  This
feature would allow them to package those libraries, and version,
upgrade and track them, without requiring a filesystem-based install.  I
myself have a couple clients who could benefit from this.


cc'ing Jim Nasby, since I think Enova is part of the target market for
this feature.  Jim, have you taken a look at this?


The name rings a bell; I think I looked at it in the past.

I've read all of this thread that I've currently got (back to 12/11), so I 
think I've got some idea what's going on.

Enova definitely has libraries of objects, and in fact we're currently working 
on releasing them via PGXN. That's proving a bit challenging since now we have 
to find a way to incorporate PGXN into our existing deployment framework (I do 
NOT want users to have to manually run pgxn client commands). Another 
complication is that we don't want our production servers downloading random, 
un-audited code, so we need an internal PGXN mirror.

I think it's probably best if I describe the issues that we've run across, to 
help the rest of the community understand the pain points. I'll work on doing 
that.

In the meantime, I can say this:

- Being forced to put files down for extensions is a PITA
- We don't have a good way to deal with extensions that have been installed in 
a non-standard schema, other than search_path, which for a complex database is 
impractical
- There's a lot that could potentially be done without any external libraries
(we've got the equivalent of probably 6-8 modules, none of which require C, and
only one uses a Perl module; that .pm is part of the module itself, and exists
only to properly factor the code between plperl functions)
- We definitely need a mechanism for declaring deps between modules

Somewhat related to this, I really wish Postgres had the idea of a class that was allowed to 
contain any type of object and could be instantiated when needed. For example, if we had an 
address class, we could instantiate it once for tracking our customer addresses, and a second 
time for tracking the addresses customers supply for their employers. Such a mechanism would probably be 
ideal for what we need, but of course you'd still have the question of how to load a class definition that 
someone else has published.
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-15 Thread Jeff Davis
On Sat, 2013-12-14 at 13:46 -0800, Josh Berkus wrote:
 All:
 
 Can someone summarize the issues with this patch for those of us who
 haven't been following it closely?  I was just chatting with a couple
 other contributors, and at this point none of us knows what it
 implements, what it doesn't implement, what the plans are for expanding
 its feature set (if any), and why Frost doesn't like it.  I tried
 reading through the thread on -hackers, and came away even more confused.
 
 Is there maybe a wiki page for it?

The patch offers an alternative to dropping files on the filesystem
before doing CREATE EXTENSION. Instead, if the extension has no C code,
you can put it in the catalog using ordinary SQL access, and execute the
same kind of CREATE EXTENSION. Aside from that, it's pretty much
identical to existing extensions.
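
To make that concrete, a rough sketch of the flow follows; the statement
names and the tiny example extension are illustrative only, not the
patch's final grammar:

  -- illustrative only: ship the install script over libpq as a template...
  CREATE TEMPLATE FOR EXTENSION pair VERSION '1.0' AS $tpl$
    CREATE TYPE pair AS (k text, v text);
    CREATE FUNCTION pair(text, text) RETURNS pair
      LANGUAGE sql AS 'SELECT ROW($1, $2)::pair';
  $tpl$;

  -- ...then instantiate it, with no file on the server's filesystem
  CREATE EXTENSION pair;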

Stephen doesn't like the idea that the SQL in an extension is a blob of
text. There are weird cases, like if you make local modifications to
objects held in an extension, then dump/reload will lose those local
modifications. Another issue, which I agree is dubious in many
situations, is that the version of an extension is not preserved across
dump/reload (this is actually a feature, which was designed with
contrib-style extensions in mind, but can be surprising in other
circumstances).

This isn't necessarily a dead-end, but there are a lot of unsettled
issues, and it will take some soul-searching to answer them. Is an
extension a blob of text with a version, that's maintained in some
external repo? Is it the job of postgres to ensure that dump/reload
creates the same situation that you started with, including local
modifications to objects that are part of an extension? Should
everything be an extension, or do we need to invent a new concept for
some of the use cases? What role do external tools play in all of this?

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-14 Thread Jeff Davis
On Fri, 2013-12-13 at 13:42 -0500, Stephen Frost wrote:
 * Jeff Davis (pg...@j-davis.com) wrote:
  For what it's worth, I think the idea of extension templates has good
  conceptual integrity. Extensions are external blobs. To make them work
  more smoothly in several ways, we move them into the catalog. They have
  pretty much the same upsides and downsides of our existing extensions,
  aside from issues directly related to filesystem vs. catalog.
 
 I've never particularly liked the idea that extensions are external
 blobs, to be honest.

It did feel a bit like you were arguing about extensions as they exist
today, rather than extension templates.

To make much more progress, it seems like we either need an ingenious
idea about how to change existing extensions to work for all purposes,
or we need to invent a new concept.

 but most of those cases also look to have external
 dependencies (eg: FDWs), which really makes me doubt this notion that
 they could be distributed independently and outside of the OS packaging
 system (or that it would be a particularly good idea to even try...).

As I pointed out before, many language communities handle libraries
outside of the OS packaging system, e.g. cpan, gems, cabal, pear, etc.
Arguably, those libraries are more likely to have external dependencies,
and yet they find it a better solution anyway.

And you are completely ignoring the common case where people are just
using C because *postgres says they have to*. For instance, defining new
data types, implementing the GiST/GIN/SP-GiST APIs, or using some C hook
in the backend. Those may have no external dependencies at all.

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-14 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 To make much more progress, it seems like we either need an ingenious
 idea about how to change existing extensions to work for all purposes,
 or we need to invent a new concept.

I'm still in favor of supporting SQL/trusted-language only
'extensions' and allowing them to be installed via a client by a
non-superuser, with perhaps superusers having the ability to install
extensions which use interpreted but untrusted languages.

 As I pointed out before, many language communities handle libraries
 outside of the OS packaging system, e.g. cpan, gems, cabal, pear, etc.

Sure, and we have PGXN, which I believe fits quite a bit better into the
above list than extension templates.  Even so, we could look at adding
binary distribution capabilities to PGXN, though David seems more
interested in having a way to go from PGXN source to RPM or Debian
packages.  Note also that all of the above operate by downloading
something remote and then installing it *locally*, yet we're being asked
to build a system like that which allows installing to a remote system
through a database protocol.

 And you are completely ignoring the common case where people are just
 using C because *postgres says they have to*. For instance, defining new
 data types, implementing the GiST/GIN/SP-GiST APIs, or using some C hook
 in the backend. Those may have no external dependencies at all.

I don't intend to ignore that case; my earlier point was merely that
there seems to be a pretty close split between no C, C with external
dependencies, and C without external dependencies.  Based on what I saw
in PGXN, we've got a good bit of all three.  The only thing which has
even been proposed thus far to address all three cases is PGXN;
extension templates don't address them all, nor do 'packages' or
whatever we want to call extensions without C code.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Extension Templates S03E11

2013-12-14 Thread Josh Berkus
All:

Can someone summarize the issues with this patch for those of us who
haven't been following it closely?  I was just chatting with a couple
other contributors, and at this point none of us knows what it
implements, what it doesn't implement, what the plans are for expanding
its feature set (if any), and why Frost doesn't like it.  I tried
reading through the thread on -hackers, and came away even more confused.

Is there maybe a wiki page for it?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-13 Thread Jeff Davis
On Wed, 2013-12-11 at 20:49 +0100, Dimitri Fontaine wrote:
 Robert Haas robertmh...@gmail.com writes:
  I strongly agree.  PostgreSQL has succeeded because we try not to do
  things at all until we're sure we know how to do them right.
 
 I still agree to the principle, or I wouldn't even try. Not in details,
 because the current design passed all the usual criteria a year ago.
 
   http://www.postgresql.org/message-id/6466.1354817...@sss.pgh.pa.us

For what it's worth, I think the idea of extension templates has good
conceptual integrity. Extensions are external blobs. To make them work
more smoothly in several ways, we move them into the catalog. They have
pretty much the same upsides and downsides of our existing extensions,
aside from issues directly related to filesystem vs. catalog.

Stephen had some legitimate concerns. I don't entirely agree, but they
are legitimate concerns, and we don't want to just override them.

At the same time, I'm skeptical of the alternatives Stephen offered
(though I don't think he intended them as a full proposal).

So right now I'm discouraged about the whole idea of installing
extensions using SQL. I don't see a lot of great options. On top of
that, the inability to handle native code limits the number of
extensions that could make use of such a facility, which dampens my
enthusiasm.

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-13 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 For what it's worth, I think the idea of extension templates has good
 conceptual integrity. Extensions are external blobs. To make them work
 more smoothly in several ways, we move them into the catalog. They have
 pretty much the same upsides and downsides of our existing extensions,
 aside from issues directly related to filesystem vs. catalog.

I've never particularly liked the idea that extensions are external
blobs, to be honest.

 Stephen had some legitimate concerns. I don't entirely agree, but they
 are legitimate concerns, and we don't want to just override them.
 
 At the same time, I'm skeptical of the alternatives Stephen offered
 (though I don't think he intended them as a full proposal).

It was more thoughts on how I'd expect these things to work.  I've also
been talking to David about what he'd like to see done with PGXN and his
thinking was a way to automate creating RPMs and DEBs based on PGXN spec
files, but he points out that the challenge there is dealing with
external dependencies.

 So right now I'm discouraged about the whole idea of installing
 extensions using SQL. I don't see a lot of great options. On top of
 that, the inability to handle native code limits the number of
 extensions that could make use of such a facility, which dampens my
 enthusiasm.

Yeah, having looked at PGXN, it turns out that some 80+% of the
extensions there have .c code included, something well beyond what I was
expecting, but most of those cases also look to have external
dependencies (eg: FDWs), which really makes me doubt this notion that
they could be distributed independently and outside of the OS packaging
system (or that it would be a particularly good idea to even try...).

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Extension Templates S03E11

2013-12-13 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 Stephen had some legitimate concerns. I don't entirely agree, but they
 are legitimate concerns, and we don't want to just override them.

The most disturbing concern for me has to do with pg_restore and major
upgrades, where the blob we have in the catalogs might not parse
correctly on the new version.

Of course, it's easy enough to fix either the source database or the
pg_restore text itself, as it's just plain SQL in there.

 At the same time, I'm skeptical of the alternatives Stephen offered
 (though I don't think he intended them as a full proposal).

I began working out a full proposal out of them, and here's the most
important conclusions I can offer from that work:

  - The main constraint we have to work against is that no matter what,
extension objects are not going to be selected for pg_dump.

That basically means that the only model of extensions we accept is
ever going to be the contrib model, whereas my current attempts are
meant to evolve the extension model way beyond contrib.

The tension we have there is extremely hard to resolve, which
explains the strange idea of storing SQL blobs in the catalogs.

  - While it's possible to work out some new versioned container
objects, I see mainly 3 consequences of doing so:

 1. We're going to kill extensions, whose design would be limited to
caring only about contribs: no data, code in C, any single version
of the extension is intended to work against only one major
version of PostgreSQL.

Guess what, I know of no extension maintained by those rules
outside of contribs.

The third rule is the easy one to work around, of course, except
if you consider the pg_restore behaviour, framed as a bug by
Stephen.

 2. The new thing would seamlessly allow for data-only extensions,
such as census or geolocation data sets, because we would actually
back up that data.

 3. The separation of concerns between the extension author, who
maintains the install and upgrade scripts, and the DBA, who
applies them, is going to be pushed entirely to the client
software, and that sounds way more fragile than what we have
now.

So I see 3 solutions ahead of us: we allow extensions to cover more
than the contrib model, third-party software works around the
limitation to make extensions usable for non-contrib things, or we
provide a new kind of container in core and kill extensions.

Of course, none of them are exclusive, and I think realistically we're
going to have to live with at least 2 of those alternatives in the near
future.

 It was more thoughts on how I'd expect these things to work.  I've also
 been talking to David about what he'd like to see done with PGXN and his
 thinking was a way to automate creating RPMs and DEBs based on PGXN spec
 files, but he points out that the challenge there is dealing with
 external dependencies.

With all due respect, we don't live in a world where it's customary to
have root privileges on your production service anymore.

 So right now I'm discouraged about the whole idea of installing
 extensions using SQL. I don't see a lot of great options. On top of
 that, the inability to handle native code limits the number of
 extensions that could make use of such a facility, which dampens my
 enthusiasm.

Rejoice! Have a look at the documentation for dynamic_library_path. Any
distribution or packaging software would trivially be able to make it so
that the modules (.so) are loaded from a non-system place.

The only thing missing is another path GUC to allow PostgreSQL to search
for extension control files in non-system places too; meanwhile, it's
already possible to adjust the filesystem privileges.
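
For instance, pointing the server at a non-system module directory is a
one-line change today, since dynamic_library_path accepts a list of
directories plus the $libdir placeholder; the directory name below is of
course made up:

  # postgresql.conf
  dynamic_library_path = '/opt/myorg/pg-modules:$libdir'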

 Yeah, having looked at PGXN, it turns out that some 80+% of the
 extensions there have .c code included, something well beyond what I was

I call that a chicken-and-egg problem.

Every serious PostgreSQL user will have stored procedure code to
maintain, where “serious” means that PostgreSQL is considered a part of
the application platform.

The only reason this code is not organized as an extension today is
that extensions are restricted to the contrib model. It's no surprise
that the extensions currently in the wild almost all fit the only
currently supported model.

 expecting, but most of those cases also look to have external
 dependencies (eg: FDWs), which really makes me doubt this notion that
 they could be distributed independently and outside of the OS packaging
 system (or that it would be a particularly good idea to even try...).

It seems to me that if dynamic_library_path and the system dynamic
loader are both looking at the same places, then storing modules and
their dependencies there is going to be all you need to make it work.

The main packaging systems (Debian and Red Hat) have automatic
dependency tracking 

Re: [HACKERS] Extension Templates S03E11

2013-12-11 Thread Dimitri Fontaine
Hi,

Stephen Frost sfr...@snowman.net writes:
 * Jeff Davis (pg...@j-davis.com) wrote:
 What is stopping Extension Templates, as proposed, from being this
 special extension creation mode? What would be a better design?

 The extra catalog tables which store SQL scripts in text columns is one
 of my main objections to the as-proposed Extension Templates.  I view
 those scripts as a poor man's definition of database objects which are
 defined properly in the catalog already.

I have a very hard time understanding this objection.

PL/pgSQL functions are just a SQL script stored as-is in the catalogs.
That applies the same way to any other PL language too, with scripts
stored as-is in the catalogs in different languages.

Even views are stored in a textual way in the catalogs, albeit in a
specific pre-processed format; it's still a text blob that could pass
for a script in a backend-specific language, parsed by the rewriter.
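
Both are easy enough to see from the catalogs; a quick sketch, with
placeholder object names:

  -- the body of a PL function is kept verbatim in pg_proc.prosrc
  SELECT prosrc FROM pg_proc WHERE proname = 'my_function';

  -- a view's stored rewrite rule can be decompiled back into SQL text
  SELECT pg_get_viewdef('my_view'::regclass, true);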

So while I hear your objection to the script-in-catalog idea, Stephen,
I think we should move forward. We don't have the luxury of only
applying patches where no compromise has to be made, where everyone is
fully happy with the solution we find as a community.

  The other big issue is that
 there isn't an easy way to see how we could open up the ability to
 create extensions to non-superusers with this approach.

The main proposal here is to only allow the owner of a template to
install it as an extension. For superusers, we can implement the needed
SET ROLE command automatically in the CREATE EXTENSION command.

Is there another security issue that this “same role” approach is not
solving? I don't think so.
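
A minimal sketch of that flow, with hypothetical role and extension
names; what CREATE EXTENSION would automate for superusers is
essentially this manual sequence:

  SET ROLE alice;               -- alice owns the template
  CREATE EXTENSION alices_lib;  -- created from alice's template
  RESET ROLE;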

 It seems like the porting issue is just a matter of finding someone to
 write a tool to reliably translate packages from PGXN into a form
 suitable to be sent using SQL commands; which we would need anyway for
 this special mode.

I already mentioned that's on my roadmap, part of the vision I'm trying
to implement here. My goal is to deliver the full solution for 9.4, and
this Extension Templates facility provides the missing in-core bits of it.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-11 Thread Tom Lane
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 Stephen Frost sfr...@snowman.net writes:
 The extra catalog tables which store SQL scripts in text columns is one
 of my main objections to the as-proposed Extension Templates.  I view
 those scripts as a poor man's definition of database objects which are
 defined properly in the catalog already.

 I have a very hard time understanding this objection.

Why?  I think it has considerable force.

 PL/pgSQL functions are just a SQL script stored as-is in the catalogs.

Those are the exception not the rule.

 Even views are stored in a textual way in the catalogs, albeit in a
 specific pre-processed format,

This is utter nonsense.  That representation has nothing to do with
the original text of the CREATE VIEW command, and the fact that we
store it as an ASCII string rather than some binary blob has more to do
with debuggability than anything else.  The important point is that we
can (and sometimes do) transform the view to something else based
on semantic understanding of what's in it.  And we also have the ability
to figure out what the view depends on, something that is mighty hard
to get out of a text blob.  (The fact that we don't have that for SQL
functions is a serious minus of our approach to functions.)

Stephen is concerned that a pure textual representation lacks any deep
semantic understanding of what's in the extension, and I think that's
indeed something to be concerned about.  It's perhaps not a 100% show
stopper, but it's something to be avoided unless the benefits of
storing pure text are overwhelming.  Which you don't seem to have
convinced people of.

 So while I hear your objection to the script-in-catalog idea, Stephen,
 I think we should move forward. We don't have the luxury of only
 applying patches where no compromise has to be made, where everyone is
 fully happy with the solution we find as a community.

You've got that backwards.  We do have the luxury of rejecting new
features until people are generally satisfied that the basic design is
right.  There's no overlord decreeing that this must be in 9.4.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-11 Thread Robert Haas
On Wed, Dec 11, 2013 at 10:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 So while I hear your objection to the script in catalog idea Stephen,
 I think we should move forward. We don't have the luxury of only
 applying patches where no compromise has to be made, where everyone is
 fully happy with the solution we find as a community.

 You've got that backwards.  We do have the luxury of rejecting new
 features until people are generally satisfied that the basic design is
 right.  There's no overlord decreeing that this must be in 9.4.

I strongly agree.  PostgreSQL has succeeded because we try not to do
things at all until we're sure we know how to do them right.
Sometimes we lag behind in features or performance as a result of that
- but the upside is that when we say something now works, it does.
Moreover, it means that the number of bad design decisions we're left
to support off into eternity is comparatively small.  Those things are
of great benefit to our community.

I can certainly understand Dimitri's frustration, in that he's written
several versions of this patch and none have been accepted.  But what
that means is that none of those approaches have consensus behind
them, which is another way of saying that, as a community, we really
*don't* know the best way to solve this problem, and our community
policy in that situation is to take no action until we do.  I've
certainly had my own share of disappointments about patches I've
written which I believed, and in some cases still believe, to be
really good work, and I'd really like to be able to force them
through.  But that's not how it works.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-11 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 You've got that backwards.  We do have the luxury of rejecting new
 features until people are generally satisfied that the basic design is
 right.  There's no overlord decreeing that this must be in 9.4.

 I strongly agree.  PostgreSQL has succeeded because we try not to do
 things at all until we're sure we know how to do them right.

I still agree to the principle, or I wouldn't even try. Not in details,
because the current design passed all the usual criteria a year ago.

  http://www.postgresql.org/message-id/6466.1354817...@sss.pgh.pa.us

 I can certainly understand Dimitri's frustration, in that he's written
 several versions of this patch and none have been accepted.  But what

The design was accepted, last year. It took a year to review it, which
is fair enough, only to find new problems again. Circles at their best.
You just said on another thread that perfect is the enemy of good. What
about applying the same line of thought to this patch?

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-11 Thread Stephen Frost
Dimitri,

* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  The extra catalog tables which store SQL scripts in text columns is one
  of my main objections to the as-proposed Extension Templates.  I view
  those scripts as a poor man's definition of database objects which are
  defined properly in the catalog already.
 
 I have a very hard time understanding this objection.
 
 PL/pgSQL functions are just a SQL script stored as-is in the catalogs.
 That applies the same way to any other PL language too, with scripts
 stored as-is in the catalogs in different languages.

Sure, but in those cases only the actual function (which is, by
definition, for an *interpreted* language) is stored as text, not
the definition of the function (eg: the CREATE FUNCTION statement), nor
all of the metadata, dependency information, etc.  Also, what you're
proposing would result in having *both* in the same catalog: the
canonical form defined in pg_proc and friends, and the SQL text blob in
the extension template catalog, and I simply do not see value in that.

 So while I hear your objection to the script-in-catalog idea, Stephen,
 I think we should move forward. We don't have the luxury of only
 applying patches where no compromise has to be made, where everyone is
 fully happy with the solution we find as a community.

I understand that you wish to push this forward regardless of anyone's
concerns.  While I appreciate your frustration and the time you've spent
on this, that isn't going to change my opinion of this approach.

   The other big issue is that
  there isn't an easy way to see how we could open up the ability to
  create extensions to non-superusers with this approach.
 
 The main proposal here is to only allow the owner of a template to
 install it as an extension. For superusers, we can implement the needed
 SET ROLE command automatically in the CREATE EXTENSION command.
 
 Is there another security issue that this “same role” approach is not
 solving? I don't think so.

This isn't kind, and for that I'm sorry, but this feels, to me, like a
very hand-wavey "well, I think this would solve all the problems" answer
to the concerns raised.  I can't answer offhand if this would really
solve all of the issues because I've not tried to implement it or test
it out, but I tend to doubt that it would.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Extension Templates S03E11

2013-12-11 Thread Robert Haas
On Wed, Dec 11, 2013 at 2:49 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Robert Haas robertmh...@gmail.com writes:
 You've got that backwards.  We do have the luxury of rejecting new
 features until people are generally satisfied that the basic design is
 right.  There's no overlord decreeing that this must be in 9.4.

 I strongly agree.  PostgreSQL has succeeded because we try not to do
 things at all until we're sure we know how to do them right.

 I still agree to the principle, or I wouldn't even try. Not in details,
 because the current design passed all the usual criteria a year ago.

   http://www.postgresql.org/message-id/6466.1354817...@sss.pgh.pa.us

 I can certainly understand Dimitri's frustration, in that he's written
 several versions of this patch and none have been accepted.  But what

 The design was accepted, last year. It took a year to review it, which
 is fair enough, only to find new problems again. Circles at their best.
 You just said on another thread that perfect is the enemy of good. What
 about applying the same line of thought to this patch?

Sure.  For every patch that gets submitted, we have to decide whether
it represents an improvement over where we are today, or not.  For the
record:

1. The patch you're talking about is 2 or 3 orders of magnitude less
complicated than this one, and it is pretty easy to see that it will
not paint us into a corner.  It also happens to fix what I've long
considered a deficiency in PostgreSQL.  I think it is legitimate for
me to have more confidence in that patch than this one.

2. I explicitly did not review this patch for the precise reason that
I thought it needed a fresh pair of eyes.  You and I have not always
seen eye to eye on this and other extension-related patches, and I
don't really want to be the guy who shoots down your patches every
year.  I was prepared to sit still for this if Stephen felt it was OK.
 But after both Stephen and Heikki expressed concerns about the
approach, I decided to say that I found those concerns valid also.

I happen to think that Stephen did a very good job of explaining why
blobs in the catalog could be a very bad plan.  Specifically, I
believe he mentioned that it creates a dump/restore hazard.  If a new
keyword gets added in a new server version, a logical dump of the
extension made by a new pg_dump against the old server version will
restore properly on the new version.  Dumping and restoring the blobs
and re-executing them on the new version may fail.  I had not thought of this
issue when this was last discussed, or at least I don't remember
having thought of it, and based on Tom's having endorsed the previous
design, I'm guessing he didn't think of it at the time, either.

I think that Stephen's other points about duplicating data, etc. are
somewhat valid as well, but I am particularly sensitive about dump and
restore hazards.  I don't think you will find any time in the history
of this project when I endorsed any change that would create more of
them or declined to endorse fixing them, and if you do it was probably
in my first year of involvement when I did not understand so well as I
do now how much pain such problems create.  Users are remarkably
skilled at finding these bugs; it's been less than two weeks since we
fixed the most recent one, and they cause a lot of pain.  The only
saving grace is that, up until now, we've pretty much always been able
to patch them by changing pg_dump(all).  The problems that this patch
would create can't be fixed that way, though: you'd have to manually
hack up the blobs stored in the catalog, or manually edit the
dumpfile.  That's not good.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-09 Thread Robert Haas
On Sat, Dec 7, 2013 at 3:12 AM, Jeff Davis pg...@j-davis.com wrote:
 So if we do it this way, then we should pick a new name, like package.

That was my first reaction as well, when I looked at this a few years
ago, but I've since backed away from that position.  You're certainly
correct that it's awkward to have a single kind of object that behaves
in two radically different ways, but it's also pretty awkward to have
the same stuff installed as one of two completely different types of
objects depending on who installed it and how.

If we're targeting deployment of user-written application code, then I
can see that it might make sense to have a different concept than
extension for that, because arguably it's a different problem,
though it's no longer clear to me that it's all that much different.
But if we're talking about deployment of the same PGXN code (or
wherever upstream lives) either by a DBA who is also the sysadmin (and
can thus run make install or yum install) or one who is not (and thus
wishes to proceed entirely via libpq) then making those two different
concepts seems like it might be slicing awfully thin.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-09 Thread Jeff Davis
On Mon, 2013-12-09 at 12:17 -0500, Robert Haas wrote:
 On Sat, Dec 7, 2013 at 3:12 AM, Jeff Davis pg...@j-davis.com wrote:
  So if we do it this way, then we should pick a new name, like package.
 
 That was my first reaction as well, when I looked at this a few years
 ago, but I've since backed away from that position.  You're certainly
 correct that it's awkward to have a single kind of object that behaves
 in two radically different ways, but it's also pretty awkward to have
 the same stuff installed as one of two completely different types of
 objects depending on who installed it and how.

I think the awkwardness is most visible in the resulting documentation and
error messages.

At the moment, I'm having a difficult time imagining how we explain how
this works to users (or, when they make a mistake or don't get the
results they expect, explain to them what they did wrong and how to fix
it).

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-07 Thread Jeff Davis
On Wed, 2013-12-04 at 14:54 -0500, Tom Lane wrote:
 I think Stephen has already argued why it could be a good idea here.
 But in a nutshell: it seems like there are two use-cases to be
 supported, one where you want CREATE EXTENSION hstore to give you
 some appropriate version of hstore, and one where you want to restore
 exactly what you had on the previous installation.  It seems to me that
 exploding the extension by dumping, rather than suppressing, its
 component objects is by far the most reliable way of accomplishing the
 latter.

The behavior of an extension should not depend on how it was installed.

The kind of extension being described by Stephen will:

* Not be updatable by doing ALTER EXTENSION foo UPDATE TO '2.0'
* Dump out objects that wouldn't be dumped if they had installed the
extension using the filesystem

So if we do it this way, then we should pick a new name, like package.

And then we'll need to decide whether it still makes sense to use an
external tool to transform a PGXN extension into a form that could be
loaded as a package.

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-07 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 On Wed, 2013-12-04 at 14:54 -0500, Tom Lane wrote:
  I think Stephen has already argued why it could be a good idea here.
  But in a nutshell: it seems like there are two use-cases to be
  supported, one where you want CREATE EXTENSION hstore to give you
  some appropriate version of hstore, and one where you want to restore
  exactly what you had on the previous installation.  It seems to me that
  exploding the extension by dumping, rather than suppressing, its
  component objects is by far the most reliable way of accomplishing the
  latter.
 
 The behavior of an extension should not depend on how it was installed.
 
 The kind of extension being described by Stephen will:
 
 * Not be updatable by doing ALTER EXTENSION foo UPDATE TO '2.0'

That's correct, but consider when the above command actually works
today: when the new version is magically made available to the backend
without any action happening in PG.  That works when the filesystem can
be updated independently or the backend can reach out to some other
place and pull things down but that's, really, a pretty special
situation.  It works today specifically because we expect the OS
packaging system to make changes to the filesystem for us but this whole
'extension template' proposal is about getting away from the filesystem.

Instead, I'm suggesting an external tool which can pull down the new
version from an external repo and then apply it to the backend.
Clearly, my approach supports the general action of updating an
extension, it just doesn't expect the filesystem on the server to be
changed underneath PG nor that PG will reach out to some external
repository on the basis of a non-superuser request to get the update
script.

 * Dump out objects that wouldn't be dumped if they had installed the
 extension using the filesystem

Correct, but the general presumption here is that many of these
extensions wouldn't even be available for installation on the
filesystem anyway.

 So if we do it this way, then we should pick a new name, like package.

I've been on the fence about this for a while.  There are definitely pros
and cons to consider, but it would go very much against one of the goals
here, which is to avoid asking extension authors (or their users, to
some extent) to change anything, and I expect it'd also be a lot more
work to invent something which is 90% the same as extensions.  Perhaps
there's no help for it and we'll end up with extensions for the
OS-managed case (which can include .so files and friends) and then
packages for entirely-in-catalog sets of objects with a certain amount
of metadata included (version and the like).

 And then we'll need to decide whether it still makes sense to use an
 external tool to transform a PGXN extension into a form that could be
 loaded as a package.

I'd certainly think it would be, but if we're moving away from calling
them extensions then I'm afraid extension authors and users would end up
having to change something anyway, no matter the tool.  Perhaps that's
reasonable and we can at least minimize the impact, but much of what
extensions offer is, in fact, also what's needed for packages, and I'm
not thrilled with the apparent duplication.

It just occurred to me that perhaps we can call these something
different toward the end user but use the existing catalogs and code
for extensions to handle our representation, with a few minor tweaks...
Not sure if I like that idea, but it's a thought.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Extension Templates S03E11

2013-12-07 Thread Jeff Davis
On Sat, 2013-12-07 at 12:27 -0500, Stephen Frost wrote:
 * Jeff Davis (pg...@j-davis.com) wrote:
  The behavior of an extension should not depend on how it was installed.
  
  The kind of extension being described by Stephen will:
  
  * Not be updatable by doing ALTER EXTENSION foo UPDATE TO '2.0'
 
  ... [ reason ] ...

  * Dump out objects that wouldn't be dumped if they had installed the
  extension using the filesystem
  ... [ reason ] ...

I understand there are reasons, but I'm having a hard time getting past
the idea that "I have extension foo v1.2" now needs to be qualified with
"installed using SQL" or "installed using the filesystem" to know what
you actually have and how it will behave.


Stepping back, maybe we need to do some more research on existing
SQL-only extensions. We might be over-thinking this. How many extensions
are really just a collection of functions on existing types? If you
define a new data type, almost all of the functions seem to revolve
around C code -- not just to define the basic data type, but also the
GiST support routines, which then mean you're implementing operators in
C too, etc.

Perhaps we should first focus on making SQL-only extensions more useful?

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-07 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 I understand there are reasons, but I'm having a hard time getting past
 the idea that "I have extension foo v1.2" now needs to be qualified with
 "installed using SQL" or "installed using the filesystem" to know what
 you actually have and how it will behave.

I can certainly understand that.

 Stepping back, maybe we need to do some more research on existing
 SQL-only extensions. We might be over-thinking this. How many extensions
 are really just a collection of functions on existing types? If you
 define a new data type, almost all of the functions seem to revolve
 around C code -- not just to define the basic data type, but also the
 GiST support routines, which then mean you're implementing operators in
 C too, etc.

Fair point; I'll try to find time to review the 100+ extensions on PGXN
and classify them (unless someone else would like to or happens to
already know?).  That said, there are certainly cases where people find
the existing extension system stinks for their SQL-only code and
therefore don't use it, which is exactly the case I'm in at work.  We
have a ton of PL/pgSQL code (along with schema definitions and the
like) and our own build system, which builds 'full/new' databases based
on a certain version *and* will verify that a newly built database
matches the current version plus a hand-written update script (which,
of course, we have to write), with versioned releases, etc.

Point simply being that, were extensions more generally useful, we might
see more of them and we need to consider more than just what's in PGXN,
and therefore already built as an extension, today.

 Perhaps we should first focus on making SQL-only extensions more useful?

I'm not following what you're suggesting here..?  In general, I agree;
do you have specific ideas about how to do that?  Ones which don't
involve a superuser or modifying the filesystem under PG, and which
work with replication and backups?

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Extension Templates S03E11

2013-12-05 Thread Peter Eisentraut
On 12/1/13, 10:47 PM, Stephen Frost wrote:
 Having a management system for sets of objects is a *great* idea- and
 one which we already have through schemas.  What we don't have is any
 kind of versioning system built-in or other metadata about it, nor do we
 have good tooling which leverages such a versioning or similar system.
 Extensions provide some of that metadata around schemas and object
 definitions,

Schemas can't manage objects that are not in schemas, so that won't work.

It would be great if we could take the dependency tracking mechanism in
extensions and expose it separately.  I would like to be able to say

START PACKAGE foo  -- bad name

bunch of DDL

STOP PACKAGE

use it, later

DROP PACKAGE foo;


This mechanism already exists in extensions, but it's combined with a
bunch of other things.  Separating those things (and naming them
separately) might clear a few things up.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Jeff Davis
On Tue, 2013-12-03 at 08:44 -0500, Stephen Frost wrote:
 * Tom Lane (t...@sss.pgh.pa.us) wrote:
   On 3 December 2013 02:02, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
   ISTM that the real solution to this particular problem is to decouple
   the extensions that are currently in contrib from a specific postgres
   version.
  
  Problem?  It's not a bug that you get hstore 1.2 when you dump from 9.2
  and reload into 9.3; that's a feature.  You wanted an upgrade, presumably,
 
 I don't buy this argument at *all* and it's not going to fly when we've
 got multiple versions of an extension available concurrently.

It seems there's a use case for both behaviors; perhaps we should
include it in the control information?

  preserve_version_on_dump (boolean)

FWIW, I find the current behavior surprising when in the mindset of a
SQL extension. But it makes sense for things more closely tied to the
backend.
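
A sketch of how that might look in a control file; the last flag follows
the naming suggested above and is purely hypothetical, while the others
are existing control parameters:

  # myext.control
  comment = 'SQL-only helper library'
  default_version = '1.2'
  relocatable = true
  preserve_version_on_dump = true   # proposed, not an existing parameter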

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Jeff Davis
On Mon, 2013-12-02 at 15:44 -0500, Stephen Frost wrote:
 How are we going to handle new keywords
 being added in new major versions?  A pg_dump of the extension template
 script is then going to be loaded into the new major version but will
 not actually be able to be run because it'll error out...

Elsewhere in the thread you argued that the version of an extension
should be preserved across dump/reload. Surely a given version of the
extension corresponds to a specific set of SQL commands (specifically,
the SQL text blob on PGXN), so it *should* error out.

Otherwise you end up with a weird situation where upgrading a 9.4
install to 9.5 allows you to keep version 1.2 of some extension, but 1.2
won't install directly to 9.5. (By the way, I think this is a problem
with pg_upgrade currently.)

You're fighting pretty hard against text blobs, but at the same time
saying that we should be able to fully make use of existing PGXN
extensions, which contain text blobs of SQL. And extension authors are
versioning their SQL blobs, not some abstract concepts internal to
postgres and the catalogs.

Just because we start with blobs from PGXN doesn't mean we need to use
blobs everywhere; but I think you're too quick to rule them out.

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Jeff Davis
On Tue, 2013-12-03 at 10:23 -0500, Robert Haas wrote:
 In more normal cases, however, the system can (and probably should)
 figure out what was intended by choosing the *shortest* path to get to
 the intended version.  For example, if someone ships 1.0, 1.0--1.1,
 1.1, and 1.1--1.2, the system should choose to run 1.1 and then
 1.1--1.2, not 1.0 and then 1.0--1.1 and then 1.1--1.2.  But that can
 be automatic: only if there are two paths of equal length (as in the
 example in the previous paragraph) do we need help from the user to
 figure out what to do.

Why do we need help from the user? Just pick a path.

For an extension update, I understand why someone wouldn't want to
accidentally downgrade 5 versions (dropping all of their dependent
objects) before updating to the latest. But this doesn't apply to
creation.

And it just seems really awkward to document, and it's a constant
maintenance burden on extension authors to specify their upgrade paths
every time they release a new version.

 Putting all that together, I'm inclined to suggest that what we really
 need is a LIST of version numbers, rather than just one.  If there one
 path to the version we're installing is shorter than any other, we
 choose that, period.  If there are multiple paths of equal length, we
 break the tie by choosing which version number appears first in the
 aforementioned list.  If that still doesn't break the tie, either
 because none of the starting points are mentioned in that list or
 because there are multiple equal-length paths starting in the same
 place, we give up and emit an error.

That seems like extreme overkill, and still doesn't give users full
control over upgrade paths.

Regards,
Jeff Davis





-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 On Tue, 2013-12-03 at 14:31 -0500, Tom Lane wrote:
  Stephen Frost sfr...@snowman.net writes:
   When it comes to dump/reload, I'd much rather see a mechanism which uses
   our deep understanding of the extension's objects (as database objects)
   to implement the dump/reload than a text blob which is carried forward
   from major version to major version and may even fail to run.
  
  Note that we're already doing that in the binary_upgrade code path.
  I agree that generalizing that approach sounds like a better idea
  than keeping a text blob around.
 
 So does this take us fully back to Inline Extensions, or is there a
 distinction that I'm missing?

I've not really looked at the inline extensions patch/proposal in depth,
but I do think that's a lot closer than this.  As I understand it,
Dimitri had a patch for this, though what I've found is the blog post.
Also, there were a lot of discussions about the idea a year or so ago,
including folks who haven't spoken up on this discussion.

 I still don't see that Extension Templates are all bad:
   * They preserve the fact that two instances of the same extension
 (e.g. in different databases) were created from the same template.

This is only true if we change the extension templates to be shared
catalogs, which they aren't today..

   * They mirror the file-based templates, so it seems easier to get
 consistent behavior.

While it might seem easier and perhaps simpler, I'm not sure that I
really buy into the idea that we'd actually be more consistent.  Even if
we are, I'm not convinced that's what we want here..  The only thing
driving us in that direction is that we're calling these 'extensions'
too.  While I don't want five different extension-like things, I'd
rather use a different name from 'extensions' if we feel that the
differences between catalog-only extensions and filesystem extensions
will cause 'extensions' overall to have terribly inconsistent behavior.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 On Mon, 2013-12-02 at 15:44 -0500, Stephen Frost wrote:
  How are we going to handle new keywords
  being added in new major versions?  A pg_dump of the extension template
  script is then going to be loaded into the new major version but will
  not actually be able to be run because it'll error out...
 
 Elsewhere in the thread you argued that the version of an extension
 should be preserved across dump/reload. Surely a given version of the
 extension corresponds to a specific set of SQL commands (specifically,
 the SQL text blob on PGXN), so it *should* error out.

I *do* think the version should be preserved (though I admit that the
argument about things released with PG does make some sense).  My point
above is that if we dump the text blob out and then just rerun it, it
might not work, but if we use pg_dump and dump the extension out as
database objects (using the newer version of pg_dump, as we always
recommend..), then it's certainly more likely to work even in the face
of new keywords and the like which change between releases.

 Otherwise you end up with a weird situation where upgrading a 9.4
 install to 9.5 allows you to keep version 1.2 of some extension, but 1.2
 won't install directly to 9.5. (By the way, I think this is a problem
 with pg_upgrade currently.)

Hmm.  I'll grant, that's an interesting situation to consider, but I'm
trying to figure out why it's better to make it always break, both on
initial installation and when doing a restore from a backup, than only
have it (most likely anyway) break on initial installation (to a newer
version that may not have existed originally).

 You're fighting pretty hard against text blobs, but at the same time
 saying that we should be able to fully make use of existing PGXN
 extensions, which contain text blobs of SQL. And extension authors are
 versioning their SQL blobs, not some abstract concepts internal to
 postgres and the catalogs.

I don't want text blobs in the backend catalogs.  I'm not arguing
against text blobs in general (that'd be kinda hard to do...).

 Just because we start with blobs from PGXN doesn't mean we need to use
 blobs everywhere; but I think you're too quick to rule them out.

Perhaps, but I really don't see the point of putting a text blob into
the database for a set of objects that we're just going to create in the
next moment.  That would be, from my point of view anyway, akin to
storing 'CREATE TABLE' statements in the catalog next to the actual
definition of the table in pg_class/pg_attribute.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Jeff Davis
On Wed, 2013-12-04 at 09:50 -0500, Stephen Frost wrote:
  I still don't see that Extension Templates are all bad:
* They preserve the fact that two instances of the same extension
  (e.g. in different databases) were created from the same template.
 
 This is only true if we change the extension templates to be shared
 catalogs, which they aren't today..

I agree with you about that -- I don't like per-DB templates.

I guess the challenge is that we might want to use namespaces to support
user-installable extensions, and namespaces reside within a DB. But I
think we can find some other solution there (e.g. user names rather than
schemas), and per-DB templates are just not a good solution anyway.

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Jeff Davis
On Tue, 2013-12-03 at 14:31 -0500, Tom Lane wrote:
 Stephen Frost sfr...@snowman.net writes:
  When it comes to dump/reload, I'd much rather see a mechanism which uses
  our deep understanding of the extension's objects (as database objects)
  to implement the dump/reload than a text blob which is carried forward
  from major version to major version and may even fail to run.
 
 Note that we're already doing that in the binary_upgrade code path.
 I agree that generalizing that approach sounds like a better idea
 than keeping a text blob around.

The reason for doing it that way in pg_upgrade was to preserve OIDs for
types, etc.:

http://www.postgresql.org/message-id/20783.1297184...@sss.pgh.pa.us

That doesn't seem to apply to ordinary dump/reload. Do you think it's
good for other reasons, as well?

Regards,
Jeff Davis






Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 On Tue, 2013-12-03 at 14:31 -0500, Tom Lane wrote:
 Stephen Frost sfr...@snowman.net writes:
 When it comes to dump/reload, I'd much rather see a mechanism which uses
 our deep understanding of the extension's objects (as database objects)
 to implement the dump/reload than a text blob which is carried forward
 from major version to major version and may even fail to run.

 Note that we're already doing that in the binary_upgrade code path.
 I agree that generalizing that approach sounds like a better idea
 than keeping a text blob around.

 The reason for doing it that way in pg_upgrade was to preserve OIDs for
 types, etc.:

That was *a* reason, but not the only one, I believe.

 That doesn't seem to apply to ordinary dump/reload. Do you think it's
 good for other reasons, as well?

I think Stephen has already argued why it could be a good idea here.
But in a nutshell: it seems like there are two use-cases to be
supported, one where you want CREATE EXTENSION hstore to give you
some appropriate version of hstore, and one where you want to restore
exactly what you had on the previous installation.  It seems to me that
exploding the extension by dumping, rather than suppressing, its
component objects is by far the most reliable way of accomplishing the
latter.  To point out just one reason why, we've never made any effort
to prohibit suitably-privileged users from modifying the objects within
an extension.  So even if you'd kept around the originally defining
text string, it might not represent current reality.  And as for relying
on some URL or other --- whoever proposed that doesn't live in the same
internet I do.  URLs aren't immutable, even on days when you can get to
them.

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Robert Haas
On Tue, Dec 3, 2013 at 11:44 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 We should also consider the possibility of a user trying to
 deliberately install an older release.  For example, if the user has
 1.0, 1.0--1.1, 1.1, 1.1--1.2, and 1.2--1.0 (a downgrade script) with
 default_full_version = 1.2, an attempt to install 1.0 should run just
 the 1.0 script, NOT 1.2 and then 1.2--1.0.

 In what I did, if you want version 1.0 and we have a script --1.0.sql
 around, then we just use that script, never kicking the path chooser.

Oh, right.  Duh.  Sorry, bad example.  I do think we want to avoid
using a downgrade script as part of an install though - and to install
from the newest possible full version (I kind of like the term base
version) whenever possible.

 break the tie by choosing which version number appears first in the
 aforementioned list.  If that still doesn't break the tie, either
 because none of the starting points are mentioned in that list or
 because there are multiple equal-length paths starting in the same
 place, we give up and emit an error.

 Jeff also did mention about tiebreakers without entering into any level
 of details.

 We won't be able to just use default_version as the tiebreaker list
 here, because of the following example:

   default_version = 1.2, 1.0

   create extension foo version '1.1';

 With such a setup it would prefer 1.2--1.1 to 1.0--1.1, which doesn't
 look like what we want. Instead, we want

   default_version = 1.2
   create_from_version_candidates = 1.0

   create extension foo version '1.1';

 Then the tie breaker is the 1.0 in create_from_version_candidates so
 we would run foo--1.0.sql and then foo--1.0--1.1.sql.

I guess one way to skin this cat would be to just let the user provide
an ordering for the versions i.e.

version_ordering = 1.0 1.1 1.2

When the user asks for version X, we reject any paths that pass
through a newer version (so that we never downgrade), and start with
the path that begins as close to the target version as possible.  For
scenarios where people might be installing either an older or newer
version, that might be easier to understand than a base-version
preference list.
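
To illustrate with a hypothetical layout (the control-file syntax and the
file names here are invented purely for the sake of the example):

  default_version  = 1.2
  version_ordering = 1.0 1.1 1.2
  -- shipped scripts: foo--1.0.sql, foo--1.2.sql, foo--1.0--1.1.sql, foo--1.2--1.1.sql

  CREATE EXTENSION foo VERSION '1.1';
  -- the path starting from 1.2 passes through a version newer than the
  -- target, so it's rejected as a downgrade; the closest remaining base
  -- is 1.0, so we'd run foo--1.0.sql and then foo--1.0--1.1.sql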

 Barring objections, I'm going to prepare a new branch to support
 developing that behavior against only file-based extensions, and submit
 a spin-off patch to the current CF entry.

Not totally sure we're all on the same page yet, but that's not
necessarily meant to dissuade you.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Robert Haas
On Wed, Dec 4, 2013 at 3:39 AM, Jeff Davis pg...@j-davis.com wrote:
 On Tue, 2013-12-03 at 10:23 -0500, Robert Haas wrote:
 In more normal cases, however, the system can (and probably should)
 figure out what was intended by choosing the *shortest* path to get to
 the intended version.  For example, if someone ships 1.0, 1.0--1.1,
 1.1, and 1.1--1.2, the system should choose to run 1.1 and then
 1.1--1.2, not 1.0 and then 1.0--1.1 and then 1.1--1.2.  But that can
 be automatic: only if there are two paths of equal length (as in the
 example in the previous paragraph) do we need help from the user to
 figure out what to do.

 Why do we need help from the user? Just pick a path.

 For an extension update, I understand why someone wouldn't want to
 accidentally downgrade 5 versions (dropping all of their dependent
 objects) before updating to the latest. But this doesn't apply to
 creation.

I suppose.  But suppose we have 1.0, 1.1, 1.0--1.2, and 1.1--1.2.
Suppose further that 1.1 drops some interfaces present in 1.0, and 1.2
adds new stuff.  If the system chooses to run 1.0 and then 1.0--1.2,
it'll create all the deprecated interfaces and then drop them again.
Now maybe that won't cause any problems, but I bet it will. For
example, consider hstore again.  If we eventually disallow => as an
operator altogether, the 1.0 script won't even run any more.

Of course that doesn't matter for core because we've removed it
entirely from our repository and don't ship it any more, but an
out-of-core extension might well keep around more old scripts than we
do, to make it easier to use the same bundle with multiple server
versions.  Imagine, for example, that 1.0 only works on 9.4 or earlier
and 1.2 only works on releases 9.2 or later.  The extension author
wants to keep the 1.0 script around for the benefit of people who
haven't upgraded, so that they can still install the older version
that works there; but he also needs the 1.1 base version to be
preferred to the 1.0 base version, else installation of 1.2 on 10.0+
will fail completely.  My experience with software upgrades is that
cases like this, and even weirder things, happen pretty routinely, so
I think more control is good.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Peter Eisentraut
On 12/2/13, 9:14 AM, Dimitri Fontaine wrote:
 What I want to build is an “extension distribution” software that knows
 how to prepare anything from PGXN (and other places) so that it's fully
 ready for being used in the database. Then the main client would run as
 a CREATE EXTENSION ddl_command_start Event Trigger and would fetch the
 prepared extension for you and make it available, then leaving the main
 command operate as intended.
 
 Which is what I think the pex extension is doing, and that's not
 coincidental, but it runs the build step on the PostgreSQL server itself
 and needs to have a non-trivial set of file-system privileges to be
 doing so, and even needs to get root privileges with sudo for some of
 its operations.

You're thinking of autopex, and while that works, and can be made to
work better with certain small changes, I don't think it can ever be the
only solution.  Many interesting extensions will have external packages
build and run-time dependencies, and you need file-system level access
to manage that.




Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Peter Eisentraut
On 12/2/13, 2:33 PM, Greg Stark wrote:
 Just tossing an idea out there. What if you could install an extension
 by specifying not a local file name but a URL. Obviously there's a
 security issue but for example we could allow only https URLs with
 verified domain names that are in a list of approved domain names
 specified by a GUC.

This is similar to what autopex does (https://github.com/petere/autopex).




Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Peter Eisentraut
On 12/3/13, 9:20 AM, Stephen Frost wrote:
 Another option, which I generally like better, is to have a new package
 format for PGXN that contains the results of make install,
 more-or-less, synonymous to Debian source vs. .deb packages.
 
 Perhaps we could even have psql understand that format and be able to
 install the extension via a backslash command instead of having an
 external tool, but I think an external tool for dependency tracking and
 downloading of necessary dependencies ala Debian would be better than
 teaching psql to do that.

How would that handle varying file system layouts on the backend?




Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Stephen Frost
* Peter Eisentraut (pete...@gmx.net) wrote:
 On 12/3/13, 9:20 AM, Stephen Frost wrote:
  Another option, which I generally like better, is to have a new package
  format for PGXN that contains the results of make install,
  more-or-less, synonymous to Debian source vs. .deb packages.
  
  Perhaps we could even have psql understand that format and be able to
  install the extension via a backslash command instead of having an
  external tool, but I think an external tool for dependency tracking and
  downloading of necessary dependencies ala Debian would be better than
  teaching psql to do that.
 
 How would that handle varying file system layouts on the backend?

This discussion is all about catalog-only extensions and therefore we
don't really care about anything filesystem related...

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-04 Thread Jeff Davis
On Wed, 2013-12-04 at 15:28 -0500, Robert Haas wrote:
 My experience with software upgrades is that
 cases like this, and even weirder things, happen pretty routinely, so
 I think more control is good.

There would still be control: just use full SQL scripts appropriately.

I'm sure there's still room for surprise as extensions become more
complex. But ultimately, those surprises will happen because of broken
upgrade/downgrade scripts, and if those are broken, the user is probably
in for a surprise in the near future anyway.

It's fine with me if we help alleviate these problems by using a proper
system to organize these upgrades/downgrades. But everything proposed
seems pretty bad from the perspective of an extension author -- extra
documentation, extra ceremony, more room for error, and more maintenance
every time they release a new version. And once we document it, we have
to support those behaviors for a long time, which will almost certainly
prevent a better solution later.

I think we should just make it simple:
 * If there is a full SQL script of the given version, we guarantee that
we'll execute that one.
 * Otherwise, we execute the shortest path from a full version to the
requested version.
 * If there's a tie, throw an error.
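
As a worked example of those rules, take a hypothetical extension foo that
ships foo--1.0.sql, foo--1.0--1.1.sql and foo--1.1--1.2.sql:

  CREATE EXTENSION foo VERSION '1.0';
  -- a full script for 1.0 exists, so that's all we run

  CREATE EXTENSION foo VERSION '1.2';
  -- no full 1.2 script, so take the shortest path from a full version:
  -- foo--1.0.sql, then foo--1.0--1.1.sql, then foo--1.1--1.2.sql

If the author later adds foo--1.1.sql, the 1.1 + 1.1--1.2 path becomes the
shortest and wins; two equally short paths would raise an error instead.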

That leaves us with plenty of room to improve the situation later, for
instance if we support ordered versions. (I'm not sure if ordered
versions was rejected outright, or we just didn't have time to do it
properly.)

Regards,
Jeff Davis







Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Heikki Linnakangas

On 12/03/2013 09:25 AM, Jeff Davis wrote:

On Mon, 2013-12-02 at 11:07 +0200, Heikki Linnakangas wrote:

There should be no difference between file-based extensions and
catalog-based extensions. It's just two different ways to install the
same extension. The extension author doesn't need to care about that,
it's the DBA that decides which method to use to install it.

I'm going to object loudly to any proposal that doesn't meet that criterion.


But we're arguably already in this position today. For a SQL-only
extension, the author can choose between:

1. Using a schema/namespace
   a. installable over libpq
   b. installable by non-superusers
   c. no special handling when it comes to administration

2. Using an extension
   a. convenient metadata (e.g. requires)
   b. upgrade scripts
   c. omitted from pg_dump so can be managed separately
   d. relocatable
   e. not tied to one schema

And if the author decides to change, it requires porting the extension
to the other form.

Note: I'm using extension loosely here. We might call the SQL-only,
user-installable variety something else.


Good point. It's not too hard to install an extension that was written as
an extension as plain schema objects, though. You can just run the .sql
script through psql. That's what you used to do before extensions were
invented. (The scripts in contrib contain an explicit check against
that, but I don't think that's common outside contrib.)
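
For reference, that check is just a couple of psql lines at the top of each
contrib script, roughly:

  -- complain if script is sourced in psql, rather than via CREATE EXTENSION
  \echo Use "CREATE EXTENSION hstore" to load this file. \quit

so feeding the file straight to psql bails out unless you strip those lines.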


Another perspective is that that's already a situation we'd rather not 
have. Let's not make it worse by introducing a third way to install an 
extension, which again requires the extension author to package the 
extension differently.



So how do we get to the point where we have clear advice to the author
of a SQL-only extension? And how do we do that without asking them to
port anything?


Yeah, that's the crucial question of this whole thread.


Stephen mentioned using external tools and/or metadata, but to me that
sounds like it would require porting the extension away from what's on
PGXN today.


Why? The external tool can pick the extension in its current form from 
PGXN, and install it via libpq. The tool might have to jump through some 
hoops to do it, and we might need some new backend functionality to 
support it, but I don't see why the extension author needs to do anything.


That said, it might make the tool easier to write if we place some new 
requirements for extension authors. Like, stipulate that the .sql file 
is in the top-level directory of the extension tarball. But the same 
extension would still be installable with make; make install.


- Heikki




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Robert Haas
On Tue, Dec 3, 2013 at 1:31 AM, Jeff Davis pg...@j-davis.com wrote:
 On Sun, 2013-12-01 at 15:48 +0100, Dimitri Fontaine wrote:
 Jeff Davis pg...@j-davis.com writes:
  I don't see why we are trying to accommodate a case where the author
  doesn't offer enough full SQL scripts and offers broken downgrade
  scripts; or why that case is different from offering broken upgrade
  scripts.

 That's fair enough I guess. I will work on automating the choice of the
 first full script to use then, for next patch version.

 Can we separate this feature out? It's an issue with extensions today,
 and I'm eager to make some progress after the explosion of differing
 opinions today.

+1 for separating that part out.  I thought it was separated, at some point.

 Robert, do you think this is an acceptable approach to solve your pet
 peeve here:

 http://www.postgresql.org/message-id/CA+tgmoae3qs4qbqfxouzzfxrsxa0zy8ibsoysuutzdumpea...@mail.gmail.com

I'd need to look exactly what's being proposed in more detail.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Greg Stark
On Mon, Dec 2, 2013 at 7:46 PM, Robert Haas robertmh...@gmail.com wrote:
 Just tossing an idea out there. What if you could install an extension
 by specifying not a local file name but a URL. Obviously there's a
 security issue but for example we could allow only https URLs with
 verified domain names that are in a list of approved domain names
 specified by a GUC.

 That's a different feature, but I don't see anything preventing
 someone from implementing that as an extension, today, without any
 core support at all.  It would only be usable in cases where the share
 directory is writable by the database server (i.e. low-security
 installations) and you'd have to make it a function call rather than
 piggybacking on CREATE EXTENSION, but neither of those things sound
 bad to me.  (And if they are bad, they could be addressed by providing
 hooks or event triggers, leaving the rest of the functionality in the
 extension module.)

Well, all of this is implementable as an extension if you have
write access to the database server's share directory.

This is all about UI. CREATE EXTENSION is about having the core do the
bookkeeping about which files belong to which version of which
extension.

I thought the fundamental problem the in-catalog extensions were
trying to solve was the issue with not having access to the
filesystem. If that's the case, then being able to say create extension
from http://... would solve that.

If the fundamental problem is that you want multi-tenant databases to
be able to have different .so files visible depending on which
database is opened then that's a bit trickier.

-- 
greg




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
  On 3 December 2013 02:02, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
  ISTM that the real solution to this particular problem is to decouple
  the extensions that are currently in contrib from a specific postgres
  version.
 
 Problem?  It's not a bug that you get hstore 1.2 when you dump from 9.2
 and reload into 9.3; that's a feature.  You wanted an upgrade, presumably,

I don't buy this argument at *all* and it's not going to fly when we've
got multiple versions of an extension available concurrently.  I'm
willing to accept that we have limitations when it comes from a
packaging perspective (today) around extensions but the notion that my
backup utility should intentionally omit information which is required
to restore the database to the same state it was in is ridiculous.

 or you'd not have been going to 9.3 in the first place.  

This notion that a given major version of PG only ever has one version
of an extension associated with it is *also* wrong and only makes any
sense for contrib extensions- which are the exception rather than the
rule today.

 The entire reason
 why the extension mechanism works like it does is to allow that sort of
 reasonably-transparent upgrade.  It would not be a step forward to break
 that by having pg_dump prevent it (which it would fail to do anyway,
 likely, since the receiving installation might not have 1.1 available
 to install).

I agree that there should be a way to *allow* such an upgrade to happen
transparently and perhaps we keep it the way it is for contrib
extensions as a historical artifact, but other extensions are
independent of the PG major version and multiple versions will be
available concurrently for them and having pg_dump willfully ignore the
extension version is a recipe for broken backups.

Step back and consider a user who is just trying to restore his backup
of his 9.2 database into a new server, also with 9.2, as quickly as he
can to get his system online again.  Having four different versions of
extension X installed and available for 9.2, no clue or information
about which version was installed into which databases and getting
mysterious failures and errors because they're not all compatible is not
what anyone is going to want to deal with in that situation.

I certainly don't see extensions (outside of contrib) in the general
sense as being either tied to specific PG versions or being required to
maintain the same API that they started with on day 1.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 Can we separate this feature out? It's an issue with extensions today,
 and I'm eager to make some progress after the explosion of differing
 opinions today.

 +1 for separating that part out.  I thought it was separated, at some point.

  
http://www.postgresql.org/message-id/caltqxtetvi-exhdbspuey3trthug51eswuod8cur2t+rxtg...@mail.gmail.com
  
  http://www.postgresql.org/message-id/m2r4k8jpfl@2ndquadrant.fr

The only way to bugfix all the reported problems had been to have
regression testing… and it's become a necessary dependency of the
extension templates patch, so I just included it in.

My interdependent git branches development fu seems to have totally
disappeared after the main extension patch that needed 7 of those…

 I'd need to look exactly what's being proposed in more detail.

What I did propose is a new GUC default_full_version:

+   <term><varname>default_full_version</varname> (<type>string</type>)</term>
+   <listitem>
+    <para>
+     This option allows an extension author to avoid shipping all versions
+     of all scripts when shipping an extension. When a version is requested
+     and the matching script does not exist on disk, set
+     <replaceable>default_full_version</replaceable> to the first script you
+     still ship and PostgreSQL will apply the intermediate upgrade script as
+     per the <command>ALTER EXTENSION UPDATE</command> command.
+    </para>
+    <para>
+     For example, say you did provide the extension <literal>pair</literal>
+     version <literal>1.0</literal> and are now providing the
+     version <literal>1.1</literal>. If you want both current and new users
+     to be able to install the new version, you can provide both the
+     scripts <literal>pair--1.0--1.1.sql</literal>
+     and <literal>pair--1.1.sql</literal>, adding to the already
+     existing <literal>pair--1.0.sql</literal>.
+    </para>
+    <para>
+     When specifying <literal>default_version</literal>
+     and <literal>default_full_version = 1.0</literal> you can instead
+     provide only the scripts <literal>pair--1.0.sql</literal>
+     and <literal>pair--1.0--1.1.sql</literal>. The <command>CREATE
+     EXTENSION pair;</command> command will then automatically use the
+     aforementioned scripts to install version 1.0 then update it to 1.1.
+    </para>
+   </listitem>
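
To make the proposed behaviour concrete, here is a sketch of the control
file and install sequence for that pair example (default_full_version being
the proposed, not-yet-existing parameter):

  # pair.control
  default_version      = '1.1'
  default_full_version = '1.0'

  CREATE EXTENSION pair;
  -- with only pair--1.0.sql and pair--1.0--1.1.sql on disk, this runs
  -- pair--1.0.sql and then pair--1.0--1.1.sql, leaving version 1.1 installed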

What Jeff is proposing is to simplify that down and have PostgreSQL auto
discover the upgrade cycle when the version asked for isn't directly
available with a creation script.

We would keep the behavior depicted here, just in a fully automated way.

Working on a separate patch for that, then.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Stephen Frost
* Tom Dunstan (pg...@tomd.cc) wrote:
 Extensions in contrib live in a weird place. Totally builtin stuff
 should obviously be dumped without versions, and stuff which is
 completely separate and follows its own release schedule should
 obviously be versioned. I guess we consider all modules in contrib to
 offer the same transparent upgrade guarantees as builtins, so they
 shouldn't be versioned, but it feels like some of them should be, if
 only because some aren't particularly tied in to the backend all that
 tightly. But I guess that's a bogus metric, the true metric is whether
 we want people to treat them as basically built-in, with the upgrade
 guarantees that go along with that.

Note that we don't actually make guarantees about either builtins or
contrib modules when it comes to major version upgrades.  The current
way we package contrib and how we tie contrib releases to PG releases
means that we can get away with omitting the version and saying "well,
if you restore to the same PG major version then you'll get the same
contrib extension version, and if you restore into a different version
then obviously you want the version of contrib with that major
version", but that *only* works for contrib.  We need to accept that
other extensions exist and that they aren't tied to PG major versions
nor to our release schedule.  There's a lot more extensions out there
today which are *not* in contrib than there are ones which *are*.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 Stephen mentioned using external tools and/or metadata, but to me that
 sounds like it would require porting the extension away from what's on
 PGXN today.

Not at all- and that'd be the point.  An external tool could take the
PGXN extension, run 'make', then 'make install' (into a userland
directory), extract out the script and then, with a little help from PG,
run that script in extension creation mode via libpq.

Another option, which I generally like better, is to have a new package
format for PGXN that contains the results of make install,
more-or-less, synonymous to Debian source vs. .deb packages.

Perhaps we could even have psql understand that format and be able to
install the extension via a backslash command instead of having an
external tool, but I think an external tool for dependency tracking and
downloading of necessary dependencies ala Debian would be better than
teaching psql to do that.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Stephen Frost
* Greg Stark (st...@mit.edu) wrote:
 I thought the fundamental problem the in-catalog extensions were
 trying to solve was the issue with not having access to the
 filesystem. If that's the case then being able to say create extension
 from http://... would solve that.

That's not really 'solved' unless you feel we can depend on that create
extension from URL to work at pg_restore time...  I wouldn't have
guessed that people would accept that, but I've already been wrong about
such things in this thread once.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Robert Haas
On Tue, Dec 3, 2013 at 8:44 AM, Stephen Frost sfr...@snowman.net wrote:
 * Tom Lane (t...@sss.pgh.pa.us) wrote:
  On 3 December 2013 02:02, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
  ISTM that the real solution to this particular problem is to decouple
  the extensions that are currently in contrib from a specific postgres
  version.

 Problem?  It's not a bug that you get hstore 1.2 when you dump from 9.2
 and reload into 9.3; that's a feature.  You wanted an upgrade, presumably,

 I don't buy this argument at *all* and it's not going to fly when we've
 got multiple versions of an extension available concurrently.  I'm
 willing to accept that we have limitations when it comes from a
 packaging perspective (today) around extensions but the notion that my
 backup utility should intentionally omit information which is required
 to restore the database to the same state it was in is ridiculous.

 or you'd not have been going to 9.3 in the first place.

 This notion that a given major version of PG only ever has one version
 of an extension associated with it is *also* wrong and only makes any
 sense for contrib extensions- which are the exception rather than the
 rule today.

 The entire reason
 why the extension mechanism works like it does is to allow that sort of
 reasonably-transparent upgrade.  It would not be a step forward to break
 that by having pg_dump prevent it (which it would fail to do anyway,
 likely, since the receiving installation might not have 1.1 available
 to install).

 I agree that there should be a way to *allow* such an upgrade to happen
 transparently and perhaps we keep it the way it is for contrib
 extensions as a historical artifact, but other extensions are
 independent of the PG major version and multiple versions will be
 available concurrently for them and having pg_dump willfully ignore the
 extension version is a receipe for broken backups.

 Step back and consider a user who is just trying to restore his backup
 of his 9.2 database into a new server, also with 9.2, as quickly as he
 can to get his system online again.  Having four different versions of
 extension X installed and available for 9.2, no clue or information
 about which version was installed into which databases and getting
 mysterious failures and errors because they're not all compatible is not
 what anyone is going to want to deal with in that situation.

I think Tom's original idea here was that new versions of extensions
*shouldn't ever* be backward-incompatible, and therefore if this
problem arises it's the extension author's fault.  It isn't however
clear that this dream is likely to be realized in practice.  For
example, the only difference between hstore 1.0 and hstore 1.1 is that
we dropped the => operator, for the very good reason that we have been
slowly working towards deprecating => as an operator name so that we
can eventually use it for the purpose that the SQL standard specifies.
 Given that we've done it in core, we can hardly say that no one will
ever do this anywhere else.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Robert Haas
On Tue, Dec 3, 2013 at 8:43 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 What Jeff is proposing is to simplify that down and have PostgreSQL auto
 discover the upgrade cycle when the version asked for isn't directly
 available with a creation script.

 We would keep the behavior depicted here, just in a fully automated way.

 Working on a separate patch for that, then.

I like the idea of making it automatic, but it won't work in all
cases.  For example, suppose someone ships 1.0, 1.0--1.2, 1.1, and
1.1--1.2.  Since versions aren't intrinsically ordered, the system has
no way of knowing whether it's preferable to run 1.0 and then 1.0--1.2
or instead run 1.1 and then 1.1--1.2.  So I think we will need either
to introduce a way of ordering version numbers (which Tom has
previously opposed) or some concept like your default_full_version.

In more normal cases, however, the system can (and probably should)
figure out what was intended by choosing the *shortest* path to get to
the intended version.  For example, if someone ships 1.0, 1.0--1.1,
1.1, and 1.1--1.2, the system should choose to run 1.1 and then
1.1--1.2, not 1.0 and then 1.0--1.1 and then 1.1--1.2.  But that can
be automatic: only if there are two paths of equal length (as in the
example in the previous paragraph) do we need help from the user to
figure out what to do.

We should also consider the possibility of a user trying to
deliberately install an older release.  For example, if the user has
1.0, 1.0--1.1, 1.1, 1.1--1.2, and 1.2--1.0 (a downgrade script) with
default_full_version = 1.2, an attempt to install 1.0 should run just
the 1.0 script, NOT 1.2 and then 1.2--1.0.

Putting all that together, I'm inclined to suggest that what we really
need is a LIST of version numbers, rather than just one.  If one
path to the version we're installing is shorter than any other, we
choose that, period.  If there are multiple paths of equal length, we
break the tie by choosing which version number appears first in the
aforementioned list.  If that still doesn't break the tie, either
because none of the starting points are mentioned in that list or
because there are multiple equal-length paths starting in the same
place, we give up and emit an error.
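
For instance, with a hypothetical extension foo shipping foo--1.0.sql,
foo--1.1.sql, foo--1.0--1.2.sql and foo--1.1--1.2.sql, plus a preference
list of 1.1, 1.0:

  CREATE EXTENSION foo VERSION '1.2';
  -- two equal-length paths exist (1.0 then 1.0--1.2, and 1.1 then 1.1--1.2);
  -- 1.1 appears first in the list, so we'd run foo--1.1.sql followed by
  -- foo--1.1--1.2.sql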

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 That's not really 'solved' unless you feel we can depend on that create
 extension from URL to work at pg_restore time...  I wouldn't have
 guessed that people would accept that, but I've already been wrong about
 such things in this thread once.

Basically, with the extra software I want to build out-of-core, what you
have is an externally maintained repository and the scripts are
downloaded at CREATE EXTENSION time.

With the Extension Template, you then have a solid cache you can rely on
at pg_restore time.

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  That's not really 'solved' unless you feel we can depend on that create
  extension from URL to work at pg_restore time...  I wouldn't have
  guessed that people would accept that, but I've already been wrong about
  such things in this thread once.
 
 Basically, with the extra software I want to build out-of-core, what you
 have is an externally maintained repository and the scripts are
 downloaded at CREATE EXTENSION time.

I should have included above unless we actually dump the extension
objects out during pg_dump.

 With the Extension Template, you then have a solid cache you can rely on
 at pg_restore time.

Extension templates are not needed for us to be able to dump and restore
extensions.  I do not understand why you continue to argue for extension
templates as a solution to that problem when it's utter overkill and far
worse than just dumping the extension objects out at pg_dump time and
having a way to add them back as part of the extension on restore.

I understand that you once proposed that and it was shot down but I
think we need to move past that now that we've seen what the alternative
is..  That isn't to say anything about the code or about you
specifically, but, for my part, I really don't like nor see the value of
sticking script blobs into the catalog as some kind of representation of
database objects.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 In more normal cases, however, the system can (and probably should)
 figure out what was intended by choosing the *shortest* path to get to
 the intended version.  For example, if someone ships 1.0, 1.0--1.1,
 1.1, and 1.1--1.2, the system should choose to run 1.1 and then
 1.1--1.2, not 1.0 and then 1.0--1.1 and then 1.1--1.2.  But that can
 be automatic: only if there are two paths of equal length (as in the
 example in the previous paragraph) do we need help from the user to
 figure out what to do.

Yeah.

 We should also consider the possibility of a user trying to
 deliberately install an older release.  For example, if the user has
 1.0, 1.0--1.1, 1.1, 1.1--1.2, and 1.2--1.0 (a downgrade script) with
 default_full_version = 1.2, an attempt to install 1.0 should run just
 the 1.0 script, NOT 1.2 and then 1.2--1.0.

In what I did, if you want version 1.0 and we have a script --1.0.sql
around, then we just use that script, never kicking the path chooser.

The path chooser at CREATE EXTENSION time is only exercised when we don't
have a direct script to support the specific version you're asking for.

 Putting all that together, I'm inclined to suggest that what we really
 need is a LIST of version numbers, rather than just one.  If one
 path to the version we're installing is shorter than any other, we
 choose that, period.  If there are multiple paths of equal length, we

That's what Jeff did propose, yes.

 break the tie by choosing which version number appears first in the
 aforementioned list.  If that still doesn't break the tie, either
 because none of the starting points are mentioned in that list or
 because there are multiple equal-length paths starting in the same
 place, we give up and emit an error.

Jeff also did mention about tiebreakers without entering into any level
of details.

We won't be able to just use default_version as the tiebreaker list
here, because of the following example:

  default_version = 1.2, 1.0

  create extension foo version '1.1';

With such a setup it would prefer 1.2--1.1 to 1.0--1.1, which doesn't
look like what we want. Instead, we want

  default_version = 1.2
  create_from_version_candidates = 1.0

  create extension foo version '1.1';

Then the tie breaker is the 1.0 in create_from_version_candidates so
we would run foo--1.0.sql and then foo--1.0--1.1.sql.

Comments?

Barring objections, I'm going to prepare a new branch to support
developing that behavior against only file-based extensions, and submit
a spin-off patch to the current CF entry.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 I understand that you once proposed that and it was shot down but I
 think we need to move past that now that we've seen what the alternative
 is..  That isn't to say anything about the code or about you
 specifically, but, for my part, I really don't like nor see the value of
 sticking script blobs into the catalog as some kind of representation of
 database objects.

Well yeah, I'm having quite a hard time withdrawing that proposal, which
is the fourth one in three years, and that had been proposed to me on
this very mailing list, and got the infamous community buy-in about a
year ago.

It's always a hard time when you're being told that the main constraints
you had to work with suddenly are no more, because after all this work,
we realize that imposing those constraints actually made no sense.

I understand that it can happen, it still really sucks when it does.

  delusional paragraph, censored for lack of humour (incl. sarcasm)

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread David E. Wheeler
On Dec 3, 2013, at 9:14 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

 I understand that it can happen, it still really sucks when it does.
 
  delusionnal paragraph, censored for lack of humour (incl. sarcasm)

I have not followed this project closely, Dimitri, but I for one have 
appreciated your tenacity in following through on it. Extensions are awesome, 
thanks to you, and I’m happy to see all efforts to make them more so.

Thank you.

Best,

David





Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Jeff Davis
On Tue, 2013-12-03 at 10:08 +0200, Heikki Linnakangas wrote:
 Another perspective is that that's already a situation we'd rather not 
 have. Let's not make it worse by introducing a third way to install an 
 extension, which again requires the extension author to package the 
 extension differently.

+1.

 Why? The external tool can pick the extension in its current form from 
 PGXN, and install it via libpq. The tool might have to jump through some 
 hoops to do it, and we might need some new backend functionality to 
 support it, but I don't see why the extension author needs to do anything.

Ideally, it would end up the same whether it was installed by either
method -- the same entries in pg_extension, etc. I assume that's what
you mean by new backend functionality.

This sounds like Inline Extensions to me, which was previously proposed.

If I recall, that proposal trailed off because of issues with
dump/reload. If you dump the contents of the extension, it's not really
an extension; but if you don't, then the administrator can't back up the
database (because he might not have all of the extension templates for
the extensions installed). That's when the idea appeared for extension
templates stored in the catalog, so that the administrator would always
have all of the necessary templates present.

A few interesting messages I found:
http://www.postgresql.org/message-id/CA+TgmobJ-yCHt_utgJJL9WiiPssUAJWFd=3=ulrob9nhbpc...@mail.gmail.com
http://www.postgresql.org/message-id/CA+tgmoztujw7beyzf1dzdcrbg2djcvtyageychx8d52oe1g...@mail.gmail.com
http://www.postgresql.org/message-id/CA+tgmoa_0d6ef8upc03qx0unhjfzozeosno_ofucf5jgw+8...@mail.gmail.com
http://www.postgresql.org/message-id/18054.1354751...@sss.pgh.pa.us

(+Robert,Tom because I am referencing their comments)

Regards,
Jeff Davis





Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Jeff Davis
On Tue, 2013-12-03 at 09:20 -0500, Stephen Frost wrote:
 * Jeff Davis (pg...@j-davis.com) wrote:
  Stephen mentioned using external tools and/or metadata, but to me that
  sounds like it would require porting the extension away from what's on
  PGXN today.
 
 Not at all- and that'd be the point.  An external tool could take the
 PGXN extension, run 'make', then 'make install' (into a userland
 directory), extract out the script and then, with a little help from PG,
 run that script in extension creation mode via libpq.

What is stopping Extension Templates, as proposed, from being this
special extension creation mode? What would be a better design?

It seems like the porting issue is just a matter of finding someone to
write a tool to reliably translate packages from PGXN into a form
suitable to be sent using SQL commands, which we would need anyway for
this special mode.

Regards,
Jeff Davis






Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 This sounds like Inline Extensions to me, which was previously proposed.

I've not looked at that proposal very carefully, but I agree that what
we're describing is a lot closer to 'inline extensions' than 'extension
templates'.

 If I recall, that proposal trailed off because of issues with
 dump/reload. If you dump the contents of the extension, it's not really
 an extension; but if you don't, then the administrator can't back up the
 database (because he might not have all of the extension templates for
 the extensions installed). That's when the idea appeared for extension
 templates stored in the catalog, so that the administrator would always
 have all of the necessary templates present.

When it comes to dump/reload, I'd much rather see a mechanism which uses
our deep understanding of the extension's objects (as database objects)
to implement the dump/reload than a text blob which is carried forward
from major version to major version and may even fail to run.  I
realize that's different from extension files which are out on the
filesystem, but I do not see that as a bad thing.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 When it comes to dump/reload, I'd much rather see a mechanism which uses
 our deep understanding of the extension's objects (as database objects)
 to implement the dump/reload than a text blob which is carried forward
 from major version to major version and may even fail to run.

Note that we're already doing that in the binary_upgrade code path.
I agree that generalizing that approach sounds like a better idea
than keeping a text blob around.

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 On Tue, 2013-12-03 at 09:20 -0500, Stephen Frost wrote:
  * Jeff Davis (pg...@j-davis.com) wrote:
   Stephen mentioned using external tools and/or metadata, but to me that
   sounds like it would require porting the extension away from what's on
   PGXN today.
  
  Not at all- and that'd be the point.  An external tool could take the
  PGXN extension, run 'make', then 'make install' (into a userland
  directory), extract out the script and then, with a little help from PG,
  run that script in extension creation mode via libpq.
 
 What is stopping Extension Templates, as proposed, from being this
 special extension creation mode? What would be a better design?

The extra catalog tables which store SQL scripts in text columns are one
of my main objections to the as-proposed Extension Templates.  I view
those scripts as a poor man's definition of database objects which are
defined properly in the catalog already.  The other big issue is that
there isn't an easy way to see how we could open up the ability to
create extensions to non-superusers with this approach.

What I think we should really be mulling over is whether we need anything
further when it comes to non-superuser extensions: a new namespace (eg:
schemas for extensions, or maybe a prefix for user extensions, or just a
notion of ownership which gets combined with the name when doing
operations with an extension)?  A new name (not extensions, but
something else)?

 It seems like the porting issue is just a matter of finding someone to
 write a tool to reliably translate packages from PGXN into a form
 suitable to be sent using SQL commands; which we would need anyway for
 this special mode.

That's what I was thinking and hoping. :)  Of course, we haven't yet
figured out exactly what we want this special mode to look like, so it's
a bit tricky to ask anyone to write such a tool.  I keep thinking this
should be something like: create a schema, set the search path to that
schema, run the extension script more-or-less as is, then 'register'
that schema as being an extension with a certain version.  That
'registration' process could also handle renaming the schema, if the
user wants the extension in a different schema (or perhaps the initial
schema was some kind of temporary schema) or moving the objects into
an existing schema, if that's what is requested.
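
A rough sketch of that flow, where the final registration step is purely
hypothetical syntax (nothing like it exists today):

  CREATE SCHEMA ext_staging;
  SET search_path = ext_staging;
  \i pair--1.1.sql                 -- run the extension's script more-or-less as is
  -- hypothetical registration step, which could also rename the schema or
  -- move the objects into another one:
  -- ALTER SCHEMA ext_staging REGISTER AS EXTENSION pair VERSION '1.1';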

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Jeff Davis
On Tue, 2013-12-03 at 14:41 -0500, Stephen Frost wrote:
 * Jeff Davis (pg...@j-davis.com) wrote:
 The extra catalog tables which store SQL scripts in text columns are one
 of my main objections to the as-proposed Extension Templates.  

OK, that's what I thought. This seems like the root of your objection,
so let's focus here.

 The other big issue is that
 there isn't an easy way to see how we could open up the ability to
 create extensions to non-superusers with this approach.

Is this because of the namespace issue, or is there another problem
here, too?

 What I think we should really be mulling over is if we need anything
 further when it comes to non-superuser extensions; a new namespace (eg:
 schemas for extensions, or maybe prefix for user extensions, or just a
 notion of ownership which gets combined with the name when doing
 operations with an extension)?  a new name (not extensions, but
 something else)?

Agreed. Adding namespaces would best be done sooner rather than later.

 That's what I was thinking and hoping. :)  Of course, we haven't yet
 figured out exactly what we want this special mode to look like, so it's
 a bit tricky to ask anyone to write such a tool.  I keep thinking this
 should be something like: create a schema, set the search path to that
 schema, run the extension script more-or-less as is, then 'register'
 that schema as being an extension with a certain version.  That
 'registration' process could also handle renaming the schema, if the
 user wants the extension in a different schema (or perhaps the initial
 schema was some kind of temporary schema) or moving the objects into
 an existing schema, if that's what is requested.

An interesting idea to rely on schemas like that, but it seems a little
hackish. I'd prefer something that would be sane for a user to do
without the assistance of a tool. We can still recommend that they use
the PGXN format and the tool, of course.

Regards,
Jeff Davis






Re: [HACKERS] Extension Templates S03E11

2013-12-03 Thread Jeff Davis
On Tue, 2013-12-03 at 14:31 -0500, Tom Lane wrote:
 Stephen Frost sfr...@snowman.net writes:
  When it comes to dump/reload, I'd much rather see a mechanism which uses
  our deep understanding of the extension's objects (as database objects)
  to implement the dump/reload than a text blob which is carried forward
  from major version to major version and may even fail to run.
 
 Note that we're already doing that in the binary_upgrade code path.
 I agree that generalizing that approach sounds like a better idea
 than keeping a text blob around.

So does this take us fully back to Inline Extensions, or is there a
distinction that I'm missing?

I still don't see that Extension Templates are all bad:
  * They preserve the fact that two instances of the same extension
(e.g. in different databases) were created from the same template.
  * They mirror the file-based templates, so it seems easier to get
consistent behavior.

Regards,
Jeff Davis







Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Heikki Linnakangas

On 12/02/2013 05:34 AM, Stephen Frost wrote:

* Jeff Davis (pg...@j-davis.com) wrote:

I see where you're coming from, but after some thought, and looking at
the patch, I think we really do want a catalog representation for (at
least some) extensions.


Perhaps I'm missing something- but we already *have* a catalog
representation for every extension that's ever installed into a given
database.  A representation that's a heck of a lot better than a big
text blob.


Right. I think Jeff was thinking of a catalog representation for 
extensions that haven't been installed yet, but are available in the 
system and could be installed with CREATE EXTENSION foo. I wouldn't mind 
having a catalog like that. Even without any of this extension template 
stuff, it would be handy to have a view that lists all the extensions 
available in the filesystem.



   2. When 9.4 gets released, we need some solid advice for extension
authors. If they have a native shared library, I assume we just tell
them to keep using the file-based templates. But if they have a SQL-only
extension, do we tell them to port to the in-catalog templates? What if
they port to in-catalog templates, and then decide they just want to
optimize one function by writing it in native code? Do they have to port
back? What should the authors of SQL-only extensions distribute on PGXN?
Should there be a migration period where they offer both kinds of
templates until they drop support for 9.3?


This is one of the main things that I think Heikki was trying to drive
at with his comment- we really don't *want* to make extension authors
have to do anything different than what they do today.  With an external
tool, they wouldn't need to and it would just be two different ways for
an extension to be installed into a given database.  In the end though,
if we're telling people to 'port' their extensions, then I think we've
already lost.


Exactly.

There should be no difference between file-based extensions and 
catalog-based extensions. It's just two different ways to install the 
same extension. The extension author doesn't need to care about that, 
it's the DBA that decides which method to use to install it.


I'm going to object loudly to any proposal that doesn't meet that criterion.

- Heikki




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Andres Freund
On 2013-12-02 11:07:28 +0200, Heikki Linnakangas wrote:
 Perhaps I'm missing something- but we already *have* a catalog
 representation for every extension that's ever installed into a given
 database.  A representation that's a heck of a lot better than a big
 text blob.
 
 Right. I think Jeff was thinking of a catalog representation for extensions
 that haven't been installed yet, but are available in the system and could
 be installed with CREATE EXTENSION foo. I wouldn't mind having a catalog
 like that. Even without any of this extension template stuff, it would be
 handy to have a view that lists all the extensions available in the
 filesystem.

Luckily that's already there: SELECT * FROM pg_available_extensions;
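
For example, to see what's available on disk but not yet installed in the
current database:

  SELECT name, default_version, installed_version
    FROM pg_available_extensions
   WHERE installed_version IS NULL;

and pg_available_extension_versions lists every installable version.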

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Heikki Linnakangas hlinnakan...@vmware.com writes:
 Right. I think Jeff was thinking of a catalog representation for extensions
 that haven't been installed yet, but are available in the system and could
 be installed with CREATE EXTENSION foo. I wouldn't mind having a catalog
 like that. Even without any of this extension template stuff, it would be
 handy to have a view that lists all the extensions available in the
 filesystem.

  http://www.postgresql.org/docs/9.1/static/view-pg-available-extensions.html
  http://www.postgresql.org/docs/9.1/static/view-pg-available-extension-versions.html
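
Both of those are ordinary views that can be queried directly; for
example (a minimal illustration, the output depends on what's present
in a given installation):

  -- extensions available on the filesystem, with default and installed versions
  SELECT name, default_version, installed_version
    FROM pg_available_extensions;

  -- every available version of every extension, and whether it's installed
  SELECT name, version, installed
    FROM pg_available_extension_versions;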

 There should be no difference between file-based extensions and
 catalog-based extensions. It's just two different ways to install the same
 extension. The extension author doesn't need to care about that, it's the
 DBA that decides which method to use to install it.

Agreed.

 I'm going to object loudly to any proposal that doesn't meet that criteria.

Please be kind enough to point me to where my current patch is drifting
away from that criterion. What you're proposing here is what I think I
have been implementing.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
  Having a versioning notion (and whatever other meta data we, or an
  extension author, feels is useful) for what are otherwise simple containers
  (aka the schematic we already have..) makes sense and it would be great to
  provide support around that, but not this duplication of
  object definitions.
 
 I don't like duplication either, we've just been failing to find any
 alternative with pg_restore support for the last 3 years.

 *That doesn't make this approach the right one*.  If anything, I'm
 afraid we've ended up building ourselves a rube goldberg machine because
 of this constant struggle to fit a square peg into a round hole.

This duplication you're talking about only applies to CREATE EXTENSION.

I don't know of any ways to implement ALTER EXTENSION … UPDATE …
behaviour without a separate set of scripts to apply in a certain order
depending on the current and target versions of the extension.

If you know how to enable a DBA to update a set of objects in a database
only with information already found in the database, and in a way that
this information is actually *not* an SQL script, I'm all ears.

 That's basically what we already do with schemas today and hence is
 pretty darn close to what I'm proposing.  Perhaps it'd be a way to
 simply version schemas themselves- heck, with that, we could even
 provide that oft-asked-for schema delta tool in-core by being able to
 deduce the differences between schema at version X and schema at
 version Y.

Given that at any moment you have a single version of the schema
installed, I don't know how you're supposed to be able to do that?

Maybe you mean by tracking the changes at update time? Well that at
least would be a good incentive to have Command String access in event
triggers, I guess.

 That would work beautifully, and of course you would have to do that
 again manually at pg_restore time after CREATE DATABASE and before
 pg_restore, or you would need to change the fact that extension objects
 are not part of your pg_dump scripts, or you would have to name your new
 thing something other than an extension.

 We would need a way to dump and restore this, of course.

Which is available in the current patch, of course.

 Having a management system for sets of objects is a *great* idea- and
 one which we already have through schemas.  What we don't have is any
 kind of versioning system built-in or other metadata about it, nor do we
 have good tooling which leverages such a versioning or similar system.

Exactly.

How can we implement ALTER OBJECT … UPDATE TO VERSION without having
access to some SQL scripts?

The current patch offers a way to manage those scripts and apply them,
with the idea that the people managing the scripts (extension authors)
and the people applying them (DBAs) are not going to be the same people,
and that a single UPDATE command may then need to apply more than one
script.
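
To make that concrete with a hypothetical extension “foo” (shown here
with the file-based layout we have today), the author ships a chain of
scripts and the DBA only names the target version; the server may have
to apply several scripts in order:

  -- scripts shipped by the author:
  --   foo--1.0.sql        installs version 1.0
  --   foo--1.0--1.1.sql   upgrades 1.0 to 1.1
  --   foo--1.1--1.2.sql   upgrades 1.1 to 1.2
  -- the DBA issues only this; the server chains the needed scripts:
  ALTER EXTENSION foo UPDATE TO '1.2';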

 I really just don't see this as being either particularly useful or
 feasible within a reasonable amount of effort.  Shared libraries are
 really the purview of the OS packaging system.  If you want to build
 some tool which is external to PG but helps facilitate the building and
 installing of shared libraries, but doesn't use the OS packaging system
 (and, instead, attempts to duplicate it) then go for it, but don't
 expect to ship or install that through the PG backend.

I'll give you that implementing Event Triggers just to be able to build
what you're talking about on top of it and out of core might not be
called “a reasonable amount of effort.”

 The problem found here is that if a non-privileged user installs an
 extension template named “pgcrypto” and the superuser then installs
 what he believes is the extension “pgcrypto”, the malicious
 unprivileged user ends up running his own code (the extension install
 script) as a superuser.

 For my part, the problem here is this notion of extension templates in
 the PG catalog and this is just one symptom of how that's really not a
 good approach.

The only reason for that being the case is that you suppose that root on
the file system is more trustworthy as an entity than postgres on the
file system or any superuser in the PostgreSQL service.

As soon as you question that, you might come to realise that the only
difference between file-system templates and catalog templates is our
ability to deal with the problem, rather than the problem itself.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Heikki Linnakangas (hlinnakan...@vmware.com) wrote:
 Right. I think Jeff was thinking of a catalog representation for
 extensions that haven't been installed yet, but are available in the
 system and could be installed with CREATE EXTENSION foo.

I really don't particularly see value in this unless it hooks into PGXN
or similar somehow (a la how an apt repository works).  I just don't see
the point if users have to install templates just to get a list of what
extensions they have available to install.  The whole 'extension
template' piece of this just ends up being overhead and gets in the way.

 I wouldn't
 mind having a catalog like that. Even without any of this extension
 template stuff, it would be handy to have a view that lists all the
 extensions available in the filesystem.

As mentioned, that's available for the filesystem-based extensions.

 There should be no difference between file-based extensions and
 catalog-based extensions. It's just two different ways to install
 the same extension. The extension author doesn't need to care about
 that, it's the DBA that decides which method to use to install it.
 
 I'm going to object loudly to any proposal that doesn't meet that criteria.

Right, which is why I think this is going to *have* to exist outside of
the backend as an independent tool which can simply install an extension
through the normal libpq/PG object creation method- very similar to how
extension creation already happens, except that we're being fed from a
PG connection instead of reading in an SQL file from the filesystem.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
  There should be no difference between file-based extensions and
  catalog-based extensions. It's just two different ways to install the same
  extension. The extension author doesn't need to care about that, it's the
  DBA that decides which method to use to install it.
 
 Agreed.
 
  I'm going to object loudly to any proposal that doesn't meet that criteria.
 
 Please be kind enough to point me to where my current patch is drifting
 away from that criterion. What you're proposing here is what I think I
 have been implementing.

Perhaps you're seeing something down the road that I'm not, but I don't
see how what you're proposing with extension templates actually moves us
closer to this goal.

What is the next step to allow an extension pulled down from pgxn to be
installed, unchanged, into a given database?

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 What is the next step to allow an extension pulled down from pgxn to be
 installed, unchanged, into a given database?

An extension packaging system.

Unchanged is not a goal, and not possible even today.

PGXN is a *source-based* packaging system. You can't just install what's
in PGXN on the server's file system and then CREATE EXTENSION; you have
this extra step called the “build”.

Whether you're targeting a file system template or a catalog template,
PGXN is not a complete solution, you still need to build the extension.

As I already mentioned in this thread, that's even true for SQL-only
extensions today; have a look at this example:

  http://api.pgxn.org/src/mimeo/mimeo-1.0.1/
  http://api.pgxn.org/src/mimeo/mimeo-1.0.1/Makefile

So even as of today, given file-based extension templates and PGXN,
there's something missing. You can find different client tools to help
you there, such as pgxn_client and pex:

  http://pgxnclient.projects.pgfoundry.org/
  https://github.com/petere/pex

What I want to build is “extension distribution” software that knows
how to prepare anything from PGXN (and other places) so that it's fully
ready for use in the database. Then the main client would run as a
CREATE EXTENSION ddl_command_start Event Trigger, fetch the prepared
extension for you and make it available, then let the main command
operate as intended.

Which is what I think the pex extension is doing, and that's not
coincidental, but it runs the build step on the PostgreSQL server itself
and needs to have a non-trivial set of file-system privileges to be
doing so, and even needs to get root privileges with sudo for some of
its operations.
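
For what it's worth, here is a minimal sketch of that event-trigger
hook; the body is only a placeholder, since the real client would fetch
and prepare the requested extension before the CREATE EXTENSION command
proceeds (and getting at the extension name from inside the trigger is
exactly the Command String access question raised earlier in this
thread):

  CREATE OR REPLACE FUNCTION fetch_extension_template()
  RETURNS event_trigger LANGUAGE plpgsql AS $$
  BEGIN
    -- placeholder: this is where the template would be fetched/prepared
    RAISE NOTICE 'ddl_command_start fired for %', tg_tag;
  END;
  $$;

  CREATE EVENT TRIGGER extension_distribution_hook
    ON ddl_command_start
    WHEN TAG IN ('CREATE EXTENSION')
    EXECUTE PROCEDURE fetch_extension_template();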

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  *That doesn't make this approach the right one*.  If anything, I'm
  afraid we've ended up building ourselves a rube goldberg machine because
  of this constant struggle to fit a square peg into a round hole.
 
 This duplication you're talking about only applies to CREATE EXTENSION.
 
 I don't know of any ways to implement ALTER EXTENSION … UPDATE …
 behaviour without a separate set of scripts to apply in a certain order
 depending on the current and target versions of the extension.

We've already got it in the form of how filesystem extensions work
today..

 If you know how to enable a DBA to update a set of objects in a database
 only with information already found in the database, and in a way that
 this information is actually *not* an SQL script, I'm all ears.

Clearly we need the information from the extension package (the scripts
which are on the PG server's filesystem today, but need not be in the
future) but that doesn't mean we need to keep those text blobs in the
catalog.

  That's basically what we already do with schemas today and hence is
  pretty darn close to what I'm proposing.  Perhaps it'd be a way to
  simply version schemas themselves- heck, with that, we could even
  provide that oft-asked-for schema delta tool in-core by being able to
  deduce the differences between schema at version X and schema at
  version Y.
 
 Given that at any moment you have a single version of the schema
 installed, I don't know how you're supposed to be able to do that?

*I am not trying to rebuild the entire extension package from the PG
catalog*.  I do not see the need to do so either.  Perhaps that's
short-sighted of me, but I don't think so; to go back to my dpkg
example, we don't store the source package in dpkg's database nor do
people generally feel the need to rebuild .deb's from the files which
are out on the filesystem (a non-trivial task though I suppose it might
be possible to do- but not for *every version* of the package..).

 Maybe you mean by tracking the changes at update time? Well that at
 least would be a good incentive to have Command String access in event
 triggers, I guess.

I don't see the need to track the changes at all.  We don't actually
track them in the database anywhere today...  We happen to have scripts
available on the filesystem which allow us to move between versions, but
they're entirely outside the catalog and that's where they belong.

  Having a management system for sets of objects is a *great* idea- and
  one which we already have through schemas.  What we don't have is any
  kind of versioning system built-in or other metadata about it, nor do we
  have good tooling which leverages such a versioning or similar system.
 
 Exactly.
 
 How can we implement ALTER OBJECT … UPDATE TO VERSION without having
 access to some SQL scripts?
 
 The current patch offers a way to manage those scripts and apply them,
 with the idea that the people managing the scripts (extension authors)
 and the people applying them (DBAs) are not going to be the same people,
 and that it's then possible to have to apply more than a single script
 for a single UPDATE command.

Extension authors are not going to be issuing updates to everyone's
catalogs directly to update their templates..  That's still going to be
the job of the DBA, or of some tool the DBA runs.  I'm arguing that such
a tool could actually do a lot more and work outside of the PG backend
while communicating through libpq.  As I see it, you're trying to build
that tool *into* the backend and, while 'extension templates' might end
up there, I don't think you're going to get your wish when it comes to
having a PG backend reach out over the internet at the request of a
normal user, ever.

 As soon as you question that, you might come to realise that the only
 difference between file-system templates and catalog templates is our
 ability to deal with the problem, rather than the problem itself.

I really think there's a good deal more to my concerns than that. :)

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  What is the next step to allow an extension pulled down from pgxn to be
  installed, unchanged, into a given database?
 
 An extension packaging system.
 
 Unchanged is not a goal, and not possible even today.

I'm not convinced of that, actually, but you do raise a good point.

 PGXN is a *source based* packaging system. You can't just install what's
 in PGXN on the server's file system then CREATE EXTENSION, you have this
 extra step called the “build”.

Fine- so we need a step that goes from 'source' to 'built'.  I don't see
that step being done in or by a PG backend process.  Adding a new option
which can take a pgxn source and build a script from it which can be run
against PG via libpq is what I'd be going for- but that script *just
installs (or perhaps upgrades) the extension.*  There's no need for that
script, or various upgrade/downgrade/whatever scripts, to be sucked
wholesale into the PG catalog.

 What I want to build is “extension distribution” software that knows
 how to prepare anything from PGXN (and other places) so that it's fully
 ready for use in the database. Then the main client would run as a
 CREATE EXTENSION ddl_command_start Event Trigger, fetch the prepared
 extension for you and make it available, then let the main command
 operate as intended.

I really don't think that's a good approach.

 Which is what I think the pex extension is doing, and that's not
 coincidental, but it runs the build step on the PostgreSQL server itself
 and needs to have a non-trivial set of file-system privileges to be
 doing so, and even needs to get root privileges with sudo for some of
 its operations.

pex is an interesting beginning to this, but we'd need *some* backend
support for being able to install the extension via libpq (or pex would
need to be modified to not actually use our extension framework at
all for 'trusted' extensions...).

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Heikki Linnakangas

On 12/02/2013 04:14 PM, Dimitri Fontaine wrote:

Stephen Frost sfr...@snowman.net writes:

What is the next step to allow an extension pulled down from pgxn to be
installed, unchanged, into a given database?


An extension packaging system.

Unchanged is not a goal, and not possible even today.

PGXN is a *source based* packaging system. You can't just install what's
in PGXN on the server's file system then CREATE EXTENSION, you have this
extra step called the “build”.

Whether you're targeting a file system template or a catalog template,
PGXN is not a complete solution, you still need to build the extension.


So? Just make; make install and you're done. Or apt-get install foo.


What I want to build is an “extension distribution” software that knows
how to prepare anything from PGXN (and other places) so that it's fully
ready for being used in the database.


You mean, something to replace make install when make isn't available
on the server? Fair enough. You could probably write a little Perl
script to parse simple Makefiles that only copy a few static files into
place. Or add a flag to the control file indicating that the extension
follows a standard layout and doesn't need a make step.


I fear we're wandering off the point again. So let me repeat: It must be 
possible to install the same extension the way you do today, and using 
the new mechanism.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 Clearly we need the information from the extension package (the scripts
 which are on the PG server's filesystem today, but need not be in the
 future) but that doesn't mean we need to keep those text blobs in the
 catalog.

So, I guess it would have been good to hear about that about a year ago:

  http://www.postgresql.org/message-id/13481.1354743...@sss.pgh.pa.us
  http://www.postgresql.org/message-id/6466.1354817...@sss.pgh.pa.us

We could have CREATE TEMPLATE FOR EXTENSION store the scripts into some
files in PGDATA instead of the catalogs, but really I don't see the
point.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 Fine- so we need a step that goes from 'source' to 'built'.  I don't see
 that step being done in or by a PG backend process.  Adding a new option
 which can take a pgxn source and build a script from it which can be run
 against PG via libpq is what I'd be going for- but that script *just
 installs (or perhaps upgrades) the extension.*  There's no need for that
 script, or various upgrade/downgrade/whatever scripts, to be sucked
 wholesale into the PG catalog.

As you said previously, we can't ask extension authors to control what
version of their extension is installed on which database, so we need a
way to cooperate with the backend in order to know how to operate the
update.

We can't just pull that data out of the backend, though, until we've
first pushed into it the list of available versions and the update
scripts needed to run the update.

That's where I thought about pushing the whole thing down to the
catalogs and having the backend take control from there.

 What I want to build is “extension distribution” software that knows
 how to prepare anything from PGXN (and other places) so that it's fully
 ready for use in the database. Then the main client would run as a
 CREATE EXTENSION ddl_command_start Event Trigger, fetch the prepared
 extension for you and make it available, then let the main command
 operate as intended.

 I really don't think that's a good approach.

What's your alternative? Goals are:

  - using the update abilities of the extension mechanism
  - no access to the server's file system needed
  - pg_restore does the right thing

I went for the whole set of extension abilities in my patch; you're
pushing hard for me to reduce that goal to just the ability to manage
version upgrades here.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Heikki Linnakangas hlinnakan...@vmware.com writes:
 I fear we're wandering off the point again. So let me repeat: It must be
 possible to install the same extension the way you do today, and using the
 new mechanism.

The way you do it today is to run make install or apt-get install, or
something else that writes files into the right place on the file
system, usually with root privileges.

The new mechanism tries to avoid using the file system *completely*.

Sorry. I don't understand what you mean other than “I don't want this
patch because I don't understand what it is about”.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Then as soon as we are able to CREATE EXTENSION mystuff; without ever
 pre-installing files on the file system as root, then we would like to
 be able to do just that even with binary modules.

 I really just don't see this as being either particularly useful or
 feasible within a reasonable amount of effort.  Shared libraries are
 really the purview of the OS packaging system.

Yes, exactly.  What's more, you're going to face huge push-back from
vendors who are concerned about security (which is most of them).
If there were such a feature, it would end up disabled, one way or
another, in a large fraction of installations.  That would make it
impractical to use anyway for most extension authors.  I don't think
it's good project policy to fragment the user base that way.

I'm on board with the notion of an all-in-the-database extension
mechanism for extensions that consist solely of SQL objects.  But
not for ones that need a .so somewhere.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Stephen Frost sfr...@snowman.net writes:
 * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Then as soon as we are able to CREATE EXTENSION mystuff; without ever
 pre-installing files on the file system as root, then we would like to
 be able to do just that even with binary modules.

 I really just don't see this as being either particularly useful or
 feasible within a reasonable amount of effort.  Shared libraries are
 really the purview of the OS packaging system.

 Yes, exactly.  What's more, you're going to face huge push-back from
 vendors who are concerned about security (which is most of them).

Last time I talked with vendors, they were working on the OpenShift
team at Red Hat, and they actually asked me to offer them the ability
you're refusing, to let them enable a better security model.

The way they use cgroups and SELinux means that they want to be able to
load shared binaries from system user places.

 If there were such a feature, it would end up disabled, one way or
 another, in a large fraction of installations.  That would make it
 impractical to use anyway for most extension authors.  I don't think
 it's good project policy to fragment the user base that way.

That point about fragmentation is a concern I share.

 I'm on board with the notion of an all-in-the-database extension
 mechanism for extensions that consist solely of SQL objects.  But
 not for ones that need a .so somewhere.

Thanks for restating your position.

The current patch offers a feature that only works with SQL objects,
it's currently completely useless as soon as there's a .so involved.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  Clearly we need the information from the extension package (the scripts
  which are on the PG server's filesystem today, but need not be in the
  future) but that doesn't mean we need to keep those text blobs in the
  catalog.
 
 So, I guess it would have been good to hear about that about a year ago:
 
   http://www.postgresql.org/message-id/13481.1354743...@sss.pgh.pa.us
   http://www.postgresql.org/message-id/6466.1354817...@sss.pgh.pa.us
 
 We could have CREATE TEMPLATE FOR EXTENSION store the scripts into some
 files in PGDATA instead of the catalogs, but really I don't see the
 point.

Yeah, I don't particularly like that idea either, but especially if it's
going to be per-database again.  I can kinda, sorta see the point if
this was done cluster-wide but you don't like that idea and I'm not a
big fan of pushing these files out onto the filesystem anyway.

What I don't entirely follow is the argument against having
non-file-backed extensions be dumped through pg_dump/restore.  Even in
that thread, Tom appears to agree that they'd have to be dumped out in
some fashion, even if they're stored as files under PGDATA, because
otherwise you're not going to be able to restore the DB..

On the other hand, I can appreciate the concern that we don't really
want a dump/restore to include the extension definition when it's
already on the filesystem.  That said, it amazes me that we don't
include the version # of the extension in pg_dump's 'CREATE EXTENSION'
command..  How is that not a problem?
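
For illustration, this is roughly the difference (the first form is
approximately what pg_dump emits today, modulo the schema clause; the
second is the version-pinned form being suggested):

  -- roughly what pg_dump emits today, no version pinned:
  CREATE EXTENSION IF NOT EXISTS hstore WITH SCHEMA public;
  -- what pinning the version would look like:
  CREATE EXTENSION hstore VERSION '1.0';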

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  Fine- so we need a step that goes from 'source' to 'built'.  I don't see
  that step being done in or by a PG backend process.  Adding a new option
  which can take a pgxn source and build a script from it which can be run
  against PG via libpq is what I'd be going for- but that script *just
  installs (or perhaps upgrades) the extension.*  There's no need for that
  script, or various upgrade/downgrade/whatever scripts, to be sucked
  wholesale into the PG catalog.
 
 As you said previously, we can't ask extension authors to control what
 version of their extension is installed on which database, so we need a
 way to cooperate with the backend in order to know how to operate the
 update.

Sure, that sounds reasonable..

 We can't just pull that data out of the backend, though, until we've
 first pushed into it the list of available versions and the update
 scripts needed to run the update.

I'm not following this, nor why we need this master list of every
extension which exists in the world to be in every PG catalog in every
database out there.

 That's where I thought about pushing the whole thing down to the
 catalogs and having the backend take control from there.

I can appreciate the desire to do that but this particular piece really
feels like it could be done better external to the backend.  To go back
to my OS example, I feel Debian is better off with apt-get/aptitude
being independent from dpkg itself.

 What's your alternative? Goals are:
 
   - using the update abilities of the extension mechanism
   - no access to the server's file system needed
   - pg_restore does the right thing
 
 I went for the whole set of extension abilities in my patch; you're
 pushing hard for me to reduce that goal to just the ability to manage
 version upgrades here.

I'd like to see these goals met, I just don't see it being all done in C
in the PG backend.  I've tried to outline my thoughts about how we
should keep the actual extension creation scripts, upgrade scripts, etc,
out of the backend catalogs (and not on the filesystem either..) and let
those be managed externally, but that does then require that we
actually dump the extension's objects instead of just
'CREATE EXTENSION blah;'.  I understand there have been objections
raised to that, but I wonder if that isn't mainly because we're calling
these new things extensions which have this built-in notion that
they're coming from an OS packaging system which installs files
somewhere..  We certainly have none of these qualms about dumping and
restoring all the objects in a given schema.

Thanks,

Stephen




Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 On the other hand, I can appreciate the concern that we don't really
 want a dump/restore to include the extension definition when it's
 already on the filesystem.  That said, it amazes me that we don't
 include the version # of the extension in pg_dump's 'CREATE EXTENSION'
 command..  How is that not a problem?

Including the version number would be a problem.

When you install PostgreSQL 9.1, you only have hstore 1.0.
When you install PostgreSQL 9.2, you only have hstore 1.1.
When you install PostgreSQL 9.3, you only have hstore 1.2.

  http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/hstore/hstore.control;hb=refs/heads/REL9_1_STABLE
  http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/hstore/hstore.control;hb=refs/heads/REL9_2_STABLE
  http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/hstore/hstore.control;hb=refs/heads/REL9_3_STABLE

We should maybe add the extension's version number to our documentation
pages, such as the following:

  http://www.postgresql.org/docs/9.3/interactive/hstore.html

So when you pg_dump | pg_restore from 9.1 into 9.3, if pg_dump were to
be nitpicky about the version of hstore and emit the command

  CREATE EXTENSION hstore VERSION '1.0';

then pg_restore would fail.

That's just the way we maintain contribs.
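
For illustration, a DBA can see which version a given server ships and
has installed with a plain catalog query (the values differ across
major releases, per the control files linked above):

  SELECT name, default_version, installed_version
    FROM pg_available_extensions
   WHERE name = 'hstore';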

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extension Templates S03E11

2013-12-02 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Tom Lane t...@sss.pgh.pa.us writes:
  Yes, exactly.  What's more, you're going to face huge push-back from
  vendors who are concerned about security (which is most of them).
 
 Last time I talked with vendors, they were working in the Open Shift
 team at Red Hat, and they actually asked me to offer them the ability
 you're refusing, to let them enable a better security model.
 
 The way they use cgroups and SELinux means that they want to be able to
 load shared binaries from system user places.

As I've pointed out before, I'd really like to hear exactly how these
individuals are using SELinux and why they feel this is an acceptable
approach.  The only use-case that this model fits is where you don't
have *any* access control in the database itself and everyone might as
well be a superuser.  Then, sure, SELinux can prevent your personal PG
environment from destroying the others on the system in much the same
way that a chroot can help there, but most folks who are looking at MAC
would view *any* database as an independent object system which needs to
*hook into* SELinux or something similar.

In other words, I really don't think we should be encouraging this
approach and certainly not without more understanding of what they're
doing here.  Perhaps they have a use-case for it, but it might be better
done through 'adminpack' or something similar than what we support in
core.

Thanks,

Stephen



