"Diogo Biazus" <[EMAIL PROTECTED]> writes:

> I exposed the idea of bringing the xlogdump functionality to a backend
> module. The main drawback is the use case where the database is down. But
> access to a failed cluster isn't impossible, just a little more
> difficult, since it requires another cluster to be initialized.

Does that mean you're planning not to use the backend's system tables at all?
You'll look at the database cluster under analysis to get all that information
yourself?

If so then that removes a lot of the objections to running in a backend.
You're basically just using the backend as a convenient context for
manipulating and storing table-like data.

It also seems to remove a lot of the motivation for doing it in the backend.
You're not going to get any implementation-side advantages in that case.
> - I already have a database connection in cases where I want to translate
> oid to names.

You can't do that if you want to allow people to initialize a new cluster to
analyze a downed cluster.
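
For reference, when a live backend for the same cluster *is* available, the
translation is a simple catalog lookup. A minimal sketch, assuming the OID in
question identifies a relation (16384 is an arbitrary placeholder OID):

```sql
-- Translate a relation OID to its name via the system catalogs
SELECT relname FROM pg_class WHERE oid = 16384;

-- Or, more concisely, using the regclass cast
SELECT 16384::regclass;
```

Neither form works when the catalogs of the cluster being analyzed can't be
read, which is exactly the downed-cluster case above.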

> - I can connect directly to the postgresql server if I want to query xlogs
> in a remote machine (don't need remote access to the system).
> - Easier to integrate with existing admin tools, like PgAdmin.

These are unconvincing to non-Windows people. In any case, a stand-alone
program could always have a postgres module tacked on to call out to it.

That's the main reason I think a stand-alone module makes more sense. You can
always take a stand-alone module and stick an interface to it into the server.
You can't take code meant to run in the server and build a stand-alone
environment to run it.
