* Jeff Davis wrote:
> I see where you're coming from, but after some thought, and looking at
> the patch, I think we really do want a catalog representation for (at
> least some) extensions.

Perhaps I'm missing something- but we already *have* a catalog
representation for every extension that's ever installed into a given
database.  A representation that's a heck of a lot better than a big
text blob.

> Dealing with files is a conceptual mismatch that will never be as easy
> and coherent as something that database manages and understands.
> Replication, backup, and Postgres-as-a-service providers are clear
> examples, and I just don't see a file-based approach solving those
> problems.


> But bringing more of an extension into the catalog can be done, and I
> think we'll see big benefits from that.

I'm not following here- what's 'missing'?

> Imagine something like (this comes from an in-person conversation with
> Dimitri a while ago; hopefully I'm not misrepresenting his vision):
>   =# select pgxn_install_template('myextension');
> or even:
>   =# select pgxn_update_all_templates();
> That is much closer to what modern language environments do -- ruby,
> python, go, and haskell all have a language-managed extension service
> independent of the OS packaging system and don't require more privileges
> or access than running the language.

I like the general idea, but I don't particularly see the need for the
backend PG process to be making connections to these external
repositories and pulling down files to execute.  That could be done just
as easily by another process which works with the PG backend- a la how
dpkg and aptitude work together.
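To make that concrete, here's a rough sketch of what such a client-side
tool might look like.  The fetch tool and package layout are entirely
hypothetical; the only real pieces are psql and an ordinary libpq
connection- the point is that the server never reaches out to the
network itself:

```shell
# Hypothetical client-side installer, split like dpkg/aptitude:
# fetching and unpacking happen entirely outside the server.

pg_extension_fetch myextension 1.2     # hypothetical: download from a repository
cd myextension-1.2/

# Install over a normal libpq connection; no access to the
# server's filesystem is required.
psql -d mydb -f myextension--1.2.sql
```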

> That being said, there are some things about in-catalog templates that need
> some more thought:
>   1. If someone does want their OS to install extensions for them (e.g.
> the contrib package), how should that be done? This usually works fine
> with the aforementioned languages, because installation is still just
> dropping files in the right place. Postgres is different, because to put
> something in the catalog, we need a running server, which is awkward for
> a packaging system to do.

You need a running PG for the *extension* to be installed, but with the
filesystem-based extension approach we have today, the "template" (the
files on the filesystem) doesn't need PG to be running.  If we had an
external tool which could work with the PG backend to install extensions
via libpq, just as the backend works with the files on the filesystem
today, we wouldn't have this issue of bootstrapping the 'extension
template' into the catalog.
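For reference, the filesystem-based "template" today is just a pair of
files dropped under the server's sharedir- the extension name and
function here are made up, but the control-file keys and the
script-naming convention are PostgreSQL's own:

```
# myextension.control
comment = 'example extension'
default_version = '1.0'
relocatable = true
```

```sql
-- myextension--1.0.sql (read by the server when CREATE EXTENSION runs)
CREATE FUNCTION myext_hello() RETURNS text
    AS $$ SELECT 'hello'::text $$ LANGUAGE sql;
```

With those in place, `CREATE EXTENSION myextension;` is all a user runs;
the packaging system only ever touches the filesystem.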

>   2. When 9.4 gets released, we need some solid advice for extension
> authors. If they have a native shared library, I assume we just tell
> them to keep using the file-based templates. But if they have a SQL-only
> extension, do we tell them to port to the in-catalog templates? What if
> they port to in-catalog templates, and then decide they just want to
> optimize one function by writing it in native code? Do they have to port
> back? What should the authors of SQL-only extensions distribute on PGXN?
> Should there be a migration period where they offer both kinds of
> templates until they drop support for 9.3?

This is one of the main things that I think Heikki was trying to drive
at with his comment- we really don't *want* to make extension authors
do anything different from what they do today.  With an external tool,
they wouldn't need to, and there would simply be two different ways for
an extension to be installed into a given database.  In the end though,
if we're telling people to 'port' their extensions, then I think we've
already lost.

>     a. Some extensions have quite a few .sql files. It seems awkward to
> just cat them all into one giant SQL query. Not a rational problem, but
> it would bother me a little to tell people to squash their
> otherwise-organized functions into a giant blob.

'awkward' isn't the word I'd use, it's downright horrible.

>   3. What do we do about native shared libraries? Ultimately, I imagine
> that we should handle these similarly to tablespaces: have a real
> database object with an OID that extensions or functions can depend on,
> and create a symlink (with the OID as the link name) that points to the
> real file on disk. We could also export some new symbols like the shared
> library name and version for better error checking.

I'm sorry, but I do not see shared libraries working through this
system, at all.  I know that goes against what Dimitri and some others
want, but I've talked with a few folks (such as Paul Ramsey of PostGIS)
about this notion and, from that perspective, it's almost laughable to
think we could ship shared libraries in this way.  Even if we could
convince ourselves that there's some way for us to track the files on
the filesystem and work out all the per-database and whatever issues are
associated with that, it'd only work for the simplest shared libraries
which don't have any dependencies on other libraries on the system
(excepting, perhaps, libc6) and that narrows the use-case down
significantly, to the point where I don't feel it's worth all that
effort.
>   4. Do we live with both file-based and catalog-based templates
> forever? I guess probably so, because the file-based templates probably
> are a little better for contrib itself (because the complaints about
> relying on OS packaging don't apply as strongly, if at all).

We need the file-based extension "templates" because they deal with
shared libraries and because we've already got them.  I don't think we
actually need to keep track of the specific commands used to define a
catalog-based extension inside the catalog, so I'm advocating for there
simply not being any "catalog-based template" system for extensions.
Extensions themselves are already "catalog-based".


