I updated to 2.0 SNAPSHOT and I'm having a bit of trouble:
Number of nodes: 3
Version: all running the latest 2.0 SNAPSHOT build (12.12.14)
My distributed config is very basic:
{
  "autoDeploy": true,
  "hotAlignment": false,
  "executionMode": "synchronous",
  "readQuorum": 1,
  "writeQuorum": 2,
  "failureAvailableNodesLessQuorum": false,
  "readYourWrites": true,
  "clusters": {
    "internal": {},
    "index": {},
    "ODistributedConflict": {},
    "*": {
      "servers": [ "<NEW_NODE>" ]
    }
  }
}
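(For what it's worth, my understanding from the default distributed-db-config that ships with OrientDB — so treat this as a guess, not something I have verified — is that individual clusters can also be pinned to an explicit server list instead of relying only on the "*" wildcard, along these lines, using the cluster/node names from the logs below:

```json
"clusters": {
  "blob_database2": {
    "servers": [ "database2", "database1", "database3" ]
  },
  "*": {
    "servers": [ "<NEW_NODE>" ]
  }
}
```

but that still wouldn't explain why the local clusters get created in the first place.)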
At boot time, new nodes keep creating local clusters. For instance:
2014-12-12 11:30:11.496 [main] INFO c.o.o.s.hazelcast.OHazelcastPlugin - [database2] class blob, creation of new local cluster 'blob_database2' (id=-1)
2014-12-12 11:30:11.805 [main] INFO c.o.o.s.hazelcast.OHazelcastPlugin - [database2] class blob, set mastership of cluster 'blob_database2' (id=86) to 'database2'
Or:
2014-12-12 10:32:48.093 [main] INFO c.o.o.s.hazelcast.OHazelcastPlugin - [database3] class blob, creation of new local cluster 'blob_database3' (id=-1)
2014-12-12 10:32:48.389 [main] INFO c.o.o.s.hazelcast.OHazelcastPlugin - [database3] class blob, set mastership of cluster 'blob_database3' (id=239) to 'database3'
While the application is running, each node seems to insert records into its own local cluster, which becomes a problem during *update operations*: the engine thinks the record is not there!
For instance, I have 3 clusters: #1 (local to node1), #2 (local to node2), and #3 (local to node3). If I send a new doc to node1 and save it there, it gets the ID #1:0. If I then send that document to node2 for an *update*, node2 checks its local cluster #2, finds no record to update, and tries to save it as a new record, at which point the indexer throws:

com.orientechnologies.orient.core.storage.ORecordDuplicatedException: Cannot index record #68:0: found duplicated key 'test' in index 'testIdx' previously assigned to the record #143:0
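To make the failure mode concrete, here is a small self-contained Python sketch of what I suspect is happening — it does not use OrientDB at all, and all names in it are my own: each node writes into its own local cluster, an update routed to another node degrades into a fresh insert, and a shared unique index then rejects the duplicated key.

```python
# Simulation of the suspected behavior (all names hypothetical):
# each node owns a local cluster; an update for a record that lives in
# another node's cluster falls back to an insert, which the shared
# unique index rejects with a duplicated-key error.

class DuplicatedKeyError(Exception):
    pass

class Node:
    def __init__(self, cluster_id, shared_index):
        self.cluster_id = cluster_id
        self.records = {}                 # local cluster: position -> document
        self.next_pos = 0
        self.shared_index = shared_index  # unique index on field "test", shared by all nodes

    def save(self, doc):
        """Insert a new document into this node's local cluster."""
        key = doc["test"]
        if key in self.shared_index:
            raise DuplicatedKeyError(
                f"Cannot index record #{self.cluster_id}:{self.next_pos}: "
                f"found duplicated key '{key}' previously assigned to "
                f"record {self.shared_index[key]}")
        rid = f"#{self.cluster_id}:{self.next_pos}"
        self.records[self.next_pos] = doc
        self.shared_index[key] = rid
        self.next_pos += 1
        return rid

    def update(self, rid, doc):
        """Update a record, but look only in the LOCAL cluster (the bug):
        a record living in another node's cluster is 'not there', so it
        gets re-inserted via save()."""
        cluster, pos = (int(x) for x in rid.lstrip("#").split(":"))
        if cluster == self.cluster_id and pos in self.records:
            self.records[pos] = doc
            return rid
        return self.save(doc)  # re-insert -> duplicated key in the index

index = {}
node1, node2 = Node(1, index), Node(2, index)

rid = node1.save({"test": "test"})       # saved on node1 as #1:0
try:
    node2.update(rid, {"test": "test"})  # node2 cannot see #1:0 locally
except DuplicatedKeyError as e:
    print(e)
```

Running this prints a duplicated-key error of the same shape as the ORecordDuplicatedException above, with the new record sitting in node2's cluster while the key is already owned by a record in node1's cluster.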
When I check the state of the clusters from the console, they are also completely different:
orientdb {db=testOrient}> clusters

CLUSTERS
-----------------------------------------+-------+-------------------+----------------+
 NAME                                    |  ID   | CONFLICT STRATEGY |    RECORDS     |
-----------------------------------------+-------+-------------------+----------------+
 test                                    |    76 |                   |             11 |
 test_database02                         |   140 |                   |           4537 |
 test_database03                         |   174 |                   |           4006 |
-----------------------------------------+-------+-------------------+----------------+
 TOTAL = 3                               |       |                   |           8554 |
-----------------------------------------+-------+-------------------+----------------+
Is this a bug in 2.0, or am I misconfiguring something? Could it be that the other nodes don't get notified about the mastership change, so every node keeps saving only into its own local cluster?
Mateusz