I have a fairly basic cluster setup with several nodes. On 3 of them I 
want to start my application, which runs an embedded OrientDB server (1.6.4). 
The database is created on only one of them, though; the others are supposed 
to pull it from that node automatically. The flow is like this:

   1. start up all 3 apps simultaneously
   2. number 1 starts normally as it has the DB
   3. numbers 2 and 3 see there is nothing in the cluster yet, so they 
   can't pull the DB from it. They also don't have the DB locally, so they 
   keep retrying to start an OrientDB server until there is an instance 
   with a DB in the cluster
   4. when number 1 is fully started, numbers 2 and 3 get the distributed 
   config from the cluster (which includes *only* node number 1 in the 
   partitions list) and the DB from number 1
   5. numbers 2 and 3 both update the distributed config they have, so 
   number 2 ends up with a config listing nodes 1 and 2 as partitions, and 
   number 3 with one listing 1 and 3. They both try to send their updated 
   versions to the cluster.
   6. the cluster configuration is updated twice, but in both cases it is 
   simply overridden! So in the end we have a distributed config with only 2 
   nodes in the partitions list: node number 1 and whichever node joined 
   last.

This leaves us with an incorrect distributed config, which, from what I saw, 
is used when replicating data between nodes. Everything works fine when I 
start them sequentially.

Is there a way to actually merge the node list instead of simply overriding 
it?
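What I'd hope for is something like the following merge-and-retry behaviour instead of a blind overwrite: union the joiner's entry with whatever partition list is current at write time, and retry if someone else got in between. This is only a sketch of the idea in plain Java under my own assumptions (a compare-and-set loop against an AtomicReference standing in for the cluster's config store), not anything OrientDB actually exposes:

```java
import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.atomic.AtomicReference;

public class MergeJoinDemo {

    // Stand-in for the shared distributed config's partition list;
    // starts as in step 4, with only node 1 registered.
    static final AtomicReference<Set<String>> partitions =
            new AtomicReference<>(Set.of("node1"));

    // Merge the joining node into the CURRENT list rather than
    // overwriting it; if another node updated the config between our
    // read and our write, compareAndSet fails and we retry on the
    // fresh state, so no concurrent join is lost.
    static void join(String node) {
        while (true) {
            Set<String> current = partitions.get();
            Set<String> merged = new TreeSet<>(current);
            merged.add(node);
            if (partitions.compareAndSet(current,
                    Collections.unmodifiableSet(merged))) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        join("node2");
        join("node3");
        System.out.println(partitions.get()); // all three nodes survive
    }
}
```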

Regards,
Mateusz
