Hey all,
I have no problem getting master->slave replication working; however, I
have run into something I can't seem to resolve.
Here's the list of commands I used:
for a in 1 2 3 4 ; do slonik_store_node $a | slonik ; done
slonik_init_cluster | slonik
sudo slon_start 1
sudo slon_start 2
sudo slon_start 3
sudo slon_start 4
slonik_create_set 1 | slonik
slonik_subscribe_set 1 2 | slonik
slonik_subscribe_set 1 3 | slonik
slonik_subscribe_set 1 4 | slonik
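(For context: the altperl wrappers above just generate slonik scripts from slon_tools.conf. If I understand the tools correctly, a call like `slonik_subscribe_set 1 2` boils down to roughly the following — the conninfo strings here are illustrative, not copied from my cluster:

```
cluster name = replication;
node 1 admin conninfo = 'host=invertigo.domain dbname=db user=pgsql port=5432';
node 2 admin conninfo = 'host=zyklon.domain dbname=db user=pgsql port=5432';
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
```

so as far as I can tell the subscriptions themselves are plain vanilla.)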
My configuration is as follows:
[u...@zyklon /var/log/slony/slony1/node3]$ cat /usr/local/etc/slon_tools.conf
# $Id: slon_tools.conf-sample,v 1.7 2005-11-15 18:09:59 cbbrowne Exp $
if ($ENV{"SLONYNODES"}) {
    require $ENV{"SLONYNODES"};
} else {
    # The name of the replication cluster. This will be used to
    # create a schema named _$CLUSTER_NAME in the database which will
    # contain Slony-related data.
    $CLUSTER_NAME = 'replication';

    $LOGDIR = '/var/log/slony';
    # $APACHE_ROTATOR = '/usr/local/apache/bin/rotatelogs';
    # $SYNC_CHECK_INTERVAL = 1000;

    $MASTERNODE = 1;

    add_node(node     => 1,
             host     => 'invertigo.domain',
             dbname   => 'db',
             port     => 5432,
             user     => 'pgsql',
             password => '****');

    add_node(node     => 2,
             host     => 'zyklon.domain',
             dbname   => 'db',
             port     => 5432,
             user     => 'pgsql',
             password => '****');

    add_node(node     => 3,
             host     => 'tornado.domain',
             dbname   => 'db',
             port     => 5432,
             user     => 'postgres',
             password => '****');

    add_node(node     => 4,
             host     => 'knightmare.domain',
             dbname   => 'db',
             port     => 5432,
             user     => 'pgsql',
             password => '****');
}
$SLONY_SETS = {
    "set1" => {
        "set_id" => 1,
        # "origin" => 1,
        # foldCase => 0,
        "table_id" => 1,
        "sequence_id" => 1,
        "pkeyedtables" => [
            'public.account2email',
            'public.actions',
            'public.audit',
            'public.authorisation',
            'public.captcha',
            'public.comments',
            'public.emails',
            'public.evid2comm',
            'public.evid2raw',
            'public.evid2rt',
            'public.evidence',
            'public.hosts',
            'public.hosts2evid',
            'public.matviews',
            'public.most_recent_hosts_cached',
            'public.most_recent_nets_cached',
            'public.netassignments',
            'public.netnames',
            'public.netowners',
            'public.nets2evid',
            'public.nets2hosts',
            'public.nets2nets',
            'public.networks',
            'public.notifications',
            'public.permission',
            'public.rawevidence',
            'public.rdnsqueue',
            'public.reportq',
            'public.roles',
            'public.sessions',
            'public.statistics',
            'public.useraccounts',
            'public.usernets',
            'public.userprefs',
            'public.users',
            'public.whois',
            'public.whois2name',
            'public.whoisnames',
            'public.whoisobjects',
            'public.wnetassignments'
        ],
        "keyedtables" => { },
        "serialtables" => [
            'public.dns',
            'public.mx_routes',
            'public.userpermissions'
        ],
        # Sequences that need to be replicated should be entered here.
        "sequences" => [
            'public.audit_pk_seq',
            'public.captcha_id_seq',
            'public.evidence_evidid_seq',
            'public.hosts_hostsid_seq',
            'public.netnames_nameid_seq',
            'public.networks_netid_seq',
            'public.notifications_pk_seq',
            'public.rawevidence_rawid_seq',
            'public.roles_id_seq',
            'public.useraccounts_sid_seq',
            'public.usernets_id_seq',
            'public.userpermissions_id_seq',
            'public.users_id_seq'
        ],
    },
};
if ($ENV{"SLONYSET"}) {
    require $ENV{"SLONYSET"};
}
# Please do not add or change anything below this point.
1;
Node 1 replicates to node 2 with no issues; however, on nodes 3 and 4 I
see this in the log:
2009-12-05 02:09:27 EST ERROR remoteWorkerThread_1: "select "_replication".setAddSequence_int(1, 1, '"public"."audit_pk_seq"', 'Sequence public.audit_pk_seq')" PGRES_FATAL_ERROR ERROR: Slony-I: setAddSequence_int(): sequence ID 1 has already been assigned
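(In case it helps diagnose: the assignment the error complains about should be visible in the cluster catalog. Something like the query below — assuming the standard Slony-I sl_sequence table, with column names taken from my reading of the docs, so treat it as a sketch — ought to show which sequence already holds ID 1:

```
select seq_id, seq_reloid::regclass, seq_set
  from _replication.sl_sequence
 order by seq_id;
```

I can run that on whichever node is useful and post the output.)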
The docs indicate it's something I have done, so I rechecked the config
and re-initialised everything (except the data on the master node): I
used 'slonik_uninstall_nodes' etc. and re-did everything as above, and I
still get the same error! Please give me a hint about what I screwed up
or what's wrong.
Thanks.
Michelle
PS: one host is running all the slon instances at the moment. Eventually
I want replication from 1 to 3, and then from 3 to 2 and 4. (3 is a
dedicated "hub" server on the same network as 1 and 2; 1 is the write
node, and 2 is a high-speed access node, as is the remote node 4. Once
this is working there will also be nodes 5 and 6, which will be
connected to 3 and will also be on remote networks, hence the desire for
a hub.)
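If I understand slonik correctly, that hub layout would eventually be expressed with cascaded subscriptions along these lines (provider/receiver numbers per the plan above; the syntax is my best reading of the docs, so treat it as a sketch, not what I've actually run):

```
subscribe set (id = 1, provider = 1, receiver = 3, forward = yes);
subscribe set (id = 1, provider = 3, receiver = 2, forward = no);
subscribe set (id = 1, provider = 3, receiver = 4, forward = no);
```

with forward = yes on node 3 so it can act as the provider for the others.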
_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general