Hmm. Hard to say. I recall some prior advice on putting subscribe ops in their 
own separate scripts...
After any op, you can check the local sl_node, sl_set, sl_subscribe, sl_path to 
see that particular node's view of the universe. The path info between 
subscribers and their providers of course is important. For new nodes not yet 
set up with subscriptions, it may be most expedient to drop the problem node
and start again (as you already have).
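The per-node catalog checks described above can be run directly with psql. A minimal sketch, assuming the cluster is named slony_cluster (Slony-I stores its catalog in a schema named after the cluster with a leading underscore):

```sql
-- Each node keeps its own copy of the Slony-I configuration tables in the
-- cluster schema. Run these on the node whose view you want to inspect.
SELECT * FROM "_slony_cluster".sl_node;       -- nodes this node knows about
SELECT * FROM "_slony_cluster".sl_set;        -- replication sets
SELECT * FROM "_slony_cluster".sl_subscribe;  -- subscriptions (provider -> receiver)
SELECT * FROM "_slony_cluster".sl_path;       -- conninfo paths between nodes
```

Comparing sl_node and sl_path output across nodes is a quick way to spot a node that still holds a stale view of the cluster.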

Tom    :-)

From: Sung Hsin Lei <[email protected]>
Date: Thursday, February 4, 2016 at 10:24 AM
To: Tom Tignor <[email protected]>
Cc: slony <[email protected]>
Subject: Re: [Slony1-general] Cannot fully drop slony node

One more question,

After I re-created node 3 and ran (on the replicated db):


slon slony_Securithor2 "dbname = dbNAME user = slonyuser password = slonPASS 
port = 5432"


I get:


2016-02-04 17:15:05 GTB Standard Time FATAL  main: Node is not initialized properly - sleep 10s


slon then stops after 10 seconds. Any idea what happened?

Thanks again.

On Thu, Feb 4, 2016 at 9:48 AM, Sung Hsin Lei <[email protected]> wrote:
yes... that's it!!

On Thu, Feb 4, 2016 at 8:58 AM, Tignor, Tom <[email protected]> wrote:

If I'm reading right, did you run the drop node op at some point on node 1 and 
see it succeed? If it did, the sl_node table on each other node in the cluster 
(save perhaps node 3) should show it gone.
If that's the case, your cluster is fine and you can just run 'DROP SCHEMA 
mycluster CASCADE' on node 3 and then retry your store node script.
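Applied to the cluster in this thread (named slony_cluster, so the schema is "_slony_cluster"), the cleanup on node 3 would look something like the following; the schema name here is inferred from the scripts below, so adjust it if yours differs:

```sql
-- On node 3's database ONLY: remove the leftover Slony-I schema so that
-- STORE NODE can recreate it from scratch. CASCADE also drops all the
-- sl_* tables, functions, and triggers contained in the schema.
DROP SCHEMA "_slony_cluster" CASCADE;
```

Run this with psql as a superuser (or the schema owner) while no slon daemon is attached to that database.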

Tom    :-)


From: Sung Hsin Lei <[email protected]>
Date: Wednesday, February 3, 2016 at 11:37 PM
To: slony <[email protected]>
Subject: [Slony1-general] Cannot fully drop slony node

Hey guys,

I have a cluster with 3 nodes. On the main db, I run the following script:


cluster name = slony_cluster;

node 1 admin conninfo = 'dbname = dbNAME host = localhost user = slonyuser 
password = slonPASS port = 5432';
node 3 admin conninfo = 'dbname = dbNAME host = 172.16.10.4 user = slonyuser 
password = slonPASS port = 5432';

DROP NODE ( ID = 3, EVENT NODE = 1 );



I open pgadmin on the main db and I don't see node 3 anymore. However, when I
open pgadmin on the replicated db, I still see node 3. The replicated db is the 
one associated with node 3. I run the above script again on the replicated db 
but get the following error:


C:\Program Files\PostgreSQL\9.3\bin>slonik drop.txt
debug: waiting for 3,5000000004 on 1
drop.txt:4: PGRES_FATAL_ERROR lock table "_slony_securithor2".sl_event_lock, "_slony_cluster".sl_config_lock;select "_slony_securithor2".dropNode(ARRAY[3]);
  - ERROR:  Slony-I: DROP_NODE cannot initiate on the dropped node


Now I need to set up another node, which must have id=3. I run a script on the
main db (the one where pgadmin no longer shows node 3). The following is the
script I used to set up the node, and the error that I get:


cluster name = slony_cluster;

node 1 admin conninfo = 'dbname = dbNAME host = localhost user = slonyuser 
password = slonPASS port = 5432';
node 3 admin conninfo = 'dbname = dbNAME host = 172.16.10.4 user = slonyuser 
password = slonPASS port = 5432';

store node (id=3, comment = 'Slave node 3', event node=1);
store path (server = 1, client = 3, conninfo='dbname=dbNAME host=172.16.10.3 
user=slonyuser password = slonPASS port = 5432');
store path (server = 3, client = 1, conninfo='dbname=dbNAME host=172.16.10.4 
user=slonyuser password = slonPASS port = 5432');

subscribe set ( id = 1, provider = 1, receiver = 3, forward = no);





C:\Program Files\PostgreSQL\9.3\bin>slonik create.txt
drop.txt:6: Error: namespace "_slony_cluster" already exists in database of
node 3



Is there another way to drop nodes? Can I recover from this without dropping 
the cluster and restarting from scratch?


Thanks.


_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
