Alex J. Avriette wrote:
> On Fri, Mar 05, 2004 at 12:47:23AM +0100, Jochem van Dieten wrote:

>> I personally don't think that a GUI tool should be the province of the
>> Slony project. Seriously. I think that Slony should focus on a

> I very much agree with this, but this is Jan's baby, so I didn't say
> anything. I have personally never used a GUI with a postgres database
> (well, okay, I used one for a bit to troubleshoot a problem my boss was
> having with a pg node once), and I don't really plan to. I guess I was
> unaware this is a common usage pattern.

I was explicitly asking for opinions and input. I don't want this to be "my baby". In the end I am a developer, not a DBA. I know how to do it, but don't have the ultimate wisdom about how to manage it.



>> command-line api and catalogs, and allow the existing GUI projects to
>> build a slony-supporting interface.

Why a command-line API? I believe it would make sense to be able to configure and control all nodes of the entire system from psql connected to any one of the nodes. That would also make it easier for the existing GUI projects to add a Slony manager.

> In theory, most of the stuff that Slony is doing is within the database,
> and as such, could be configurable via stored procedures. I see a few
> problems with this.

> First off, it is not possible to configure external applications (such
> as the daemon erserver has) from within the database, except through
> the modification of tables within the database which are monitored by
> said application.

Which is exactly the way the Slony node daemons communicate with each other and the way most of the admin activity is actually communicated into the system.


The communication channels are "event" tables. The node daemons use LISTEN and NOTIFY to send messages from one to another. Messages are only exchanged over these channels when the replication cluster configuration is changed, or every 10 seconds to say "new replication data has accumulated, come and get it". So I think the listen/notify protocol is well suited for that.
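A minimal sketch of that event-table pattern, with illustrative names (the table, columns, and NOTIFY channel here are placeholders, not necessarily the real catalog):

```sql
-- Sketch only: sl_event and the channel name are assumed names.
CREATE TABLE "_MyCluster".sl_event (
    ev_origin   int4,   -- node that generated the event
    ev_seqno    int8,   -- per-origin event sequence number
    ev_type     text,   -- e.g. 'SYNC' or a configuration change
    ev_data     text
);

-- A node daemon waiting for work:
LISTEN "_MyCluster_Event";

-- A daemon announcing a new event after inserting the row:
INSERT INTO "_MyCluster".sl_event VALUES (1, 42, 'SYNC', NULL);
NOTIFY "_MyCluster_Event";
```

The receiving daemon then reads any unprocessed rows from the event table, so a lost notification costs nothing: the next NOTIFY (or the 10-second interval) picks the rows up.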

Some of the functionality happening on an event is already put into stored procedures, and the replication engine as well as the (to be written) admin tools just call those. But that doesn't mean that using psql will do the job. There are certain operations that need to be initiated (the corresponding stored procedure called) on one particular node, not just on any available one. Also, these stored procedures take arguments, most of which are just the ID numbers of configuration objects. Not the ideal user interface.


> Second, it increases the footprint of Slony on the database. I am
> fairly uneasy about adding more tables, functions, and triggers to my
> (already quite taxed) production database. To add further functions for
> configuration, as well as related tables and triggers, makes my job
> managing the database more difficult. Additionally, those commands are
> queries. For something as trivial as configuration data, I would much
> rather not be issuing queries against an already very busy database. I
> am much more comfortable with the principle of external configuration
> files and programs.

All tables, sequences and stored procedures/functions related to the Slony replication system reside in a separate namespace. I found out lately that (without replicating sequences yet), the whole replication system can be "cleanly" removed from a database with just a DROP SCHEMA ... CASCADE.
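For a cluster named "MyCluster" (the schema name is the cluster name prefixed with an underscore), that removal would be:

```sql
-- Drops every Slony object for the cluster "MyCluster" --
-- tables, sequences, functions and triggers -- in one statement:
DROP SCHEMA "_MyCluster" CASCADE;
```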


The problem I have with external configurations is that they collide with the hot subscribe capability. If node-3 subscribes to a set from node-1, getting the data cascaded over node-2, the event to enable that subscription has to travel from 1 over 2 to 3. When it is received there, 3 has to copy over the current state of the data from 2 and then catch up by replicating all changes that have happened during this copy, which for large data sets can take a while. So node-2 must be aware of this happening and not throw away any replication log since node-3 started copying, unless it is confirmed received by 3.

The knowledge that 3 exists must also cause other forwarding nodes to keep the log. Imagine that after 3 has successfully copied the data, node-2 dies while 3 is still catching up. At that moment, 3 can be reconfigured to get the rest of the log from 1, or from anyone else who has it, so that the copy effort is not lost ... losing it at the very moment a node is failing would just add to the pain of the DBA.
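The log-retention rule described above amounts to something like the following sketch (sl_log, sl_confirm and their columns are hypothetical names used for illustration, not the actual catalog):

```sql
-- Hypothetical sketch: a forwarding node may only trim replication log
-- rows that every dependent node has already confirmed having received.
DELETE FROM "_MyCluster".sl_log
 WHERE log_seqno <= (SELECT min(con_seqno)
                       FROM "_MyCluster".sl_confirm);
```

A static external configuration file cannot express this: the set of nodes whose confirmations must be waited for changes dynamically as subscriptions are created and as nodes fail over.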


> Lastly, and I may be the black sheep here, I don't find sql to be
> particularly useful for doing things that require a complex grammar. In
> this instance, I don't want to have to do something like:
>
>     production=# select slony_config_setval( 'log_dir', '/data/slony_logs');

> It currently looks more like
>
>     select "_MyCluster".storePath(2, 3, 'dbname=mydb host=node2', 30);
>     select "_MyCluster".storeListen(2, 2, 3);
>
> to manage the configuration. Obviously, this could be worse than the
> above example.

So it "IS" worse! The DBA is not supposed to use the system's internal API for configuration management. That is the whole reason for the admin/config tools.



> I don't understand the opposition to an external set of tools (even a
> gui if need be). It seems to me that until the postmaster has some kind
> of native replication, all replication efforts will be based on
> external programs. As such, they should be configured externally, and
> be treated as any other daemon would be.

There must be some external tools. And to be integrated into any automated failover system, they need to be command line. So that one is a given.


That still does not give an easy way to tell which of the existing tables should be replicated, into how many independent sets they can be divided, which nodes subscribe to which sets, which nodes do store and forward of log data, all that stuff.

I have started on a small lex+yacc+libpq tool that will get me over the immediate requirements I have to work on provider change and failover. I will add that to the CVS (first as a subdirectory of ducttape) in a few days.


Jan


--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #

