Ah, also, I still have some doubts about this new architecture. Let's take
the following example:
Node1 with local cluster test_node1 #1,
Node2 with local cluster test_node2 #2.
The TEST record has a unique constraint on a field called "name".
1) Send concurrent requests with the same TEST record to both nodes.
2) Node1 saves it as #1:0 locally and distributes the change to Node2.
3) Node2 saves it as #2:0 locally and distributes the change to Node1.
4) Node2 gets the replication task from Node1 and tries to put it into its
table, but there's already such an entry in the index (the #2:0 one).
5) The same thing happens on Node1 with the replication task from Node2.
What happens in that case? Does neither of them go through? What ends up in
the DB?
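To make the failure mode concrete, here is a toy simulation of the steps
above (plain Python; the node numbers, cluster IDs and record layout are
made up to match the example, and this models the behaviour I'm describing,
not OrientDB internals):

```python
# Toy model of two multi-master nodes that replicate inserts to each other.
# Each node assigns RIDs from its own local cluster and enforces a unique
# index on "name" locally.

class Node:
    def __init__(self, cluster_id):
        self.cluster_id = cluster_id
        self.records = {}        # rid -> record
        self.unique_index = {}   # value of "name" -> rid
        self.next_pos = 0

    def local_insert(self, record):
        """Client insert: assign a RID from this node's local cluster."""
        rid = "#%d:%d" % (self.cluster_id, self.next_pos)
        self.next_pos += 1
        self.records[rid] = record
        self.unique_index[record["name"]] = rid
        return rid

    def apply_replicated(self, rid, record):
        """Apply a replication task from a peer; reject on an index clash."""
        existing = self.unique_index.get(record["name"])
        if existing is not None and existing != rid:
            return "DUPLICATE: '%s' already indexed at %s" % (
                record["name"], existing)
        self.records[rid] = record
        self.unique_index[record["name"]] = rid
        return "OK"

node1, node2 = Node(1), Node(2)
doc = {"name": "test"}

# Steps 2/3: both nodes accept the same record concurrently.
rid1 = node1.local_insert(doc)   # -> #1:0
rid2 = node2.local_insert(doc)   # -> #2:0

# Steps 4/5: each node's replication task then hits the other node's
# unique index, so neither replicated write can be applied.
print(node2.apply_replicated(rid1, doc))
print(node1.apply_replicated(rid2, doc))
```

If neither side rolls its local write back, each node ends up keeping only
its own copy under a different RID, which is exactly the inconsistency I'm
worried about.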
This isn't really a standard way to implement a master-replica type of
architecture... In that kind of architecture you should have a master per
cluster (cluster in OrientDB's sense) and route all writes for that cluster
to its master. It's a bit slower, but that's the only way you can actually
get rid of inconsistencies.
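Something along these lines in the distributed config might express that
routing; the per-cluster "servers" lists below are illustrative, and
whether the first entry in a list is treated as the cluster's owner may
depend on the OrientDB version:

{
  "clusters": {
    "test_node1": { "servers": [ "node1" ] },
    "test_node2": { "servers": [ "node2" ] },
    "*": { "servers": [ "<NEW_NODE>" ] }
  }
}

i.e. writes hitting cluster test_node1 would always be routed to node1, and
the other nodes would act as replicas for that cluster.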
Mateusz
On Friday, December 12, 2014 4:25:23 PM UTC+9, Mateusz Dymczyk wrote:
>
> Ok, so I did a bit more testing and now I see how it's all working - the
> clusters are all in sync (data is being replicated) and a node always
> inserts into its local cluster.
>
> Is there a way to disable those localities? I mean, I don't want N
> clusters per class (where N is the number of nodes), as I'm relying on
> Orient's ID generation; if I have more than one cluster, I can end up with
> multiple records (up to N) sharing the same cluster position! This is a
> really undesirable, breaking change, and I haven't seen it documented
> anywhere...
>
> Mateusz
>
> On Friday, December 12, 2014 2:27:55 PM UTC+9, Mateusz Dymczyk wrote:
>>
>> I updated to 2.0 SNAPSHOT and I'm having a bit of trouble:
>>
>> Number of nodes: 3
>> Version: all running the latest 2.0 SNAPSHOT build (12.12.14)
>>
>> My distributed config is very basic:
>>
>> {
>>   "autoDeploy": true,
>>   "hotAlignment": false,
>>   "executionMode": "synchronous",
>>   "readQuorum": 1,
>>   "writeQuorum": 2,
>>   "failureAvailableNodesLessQuorum": false,
>>   "readYourWrites": true,
>>   "clusters": {
>>     "internal": {},
>>     "index": {},
>>     "ODistributedConflict": {},
>>     "*": {
>>       "servers": [ "<NEW_NODE>" ]
>>     }
>>   }
>> }
>>
>> During boot time, new nodes keep creating local clusters:
>>
>> For instance:
>>
>> 2014-12-12 11:30:11.496 [main] INFO c.o.o.s.hazelcast.OHazelcastPlugin - [database2] class blob, creation of new local cluster 'blob_database2' (id=-1)
>> 2014-12-12 11:30:11.805 [main] INFO c.o.o.s.hazelcast.OHazelcastPlugin - [database2] class blob, set mastership of cluster 'blob_database2' (id=86) to 'database2'
>>
>> Or:
>>
>> 2014-12-12 10:32:48.093 [main] INFO c.o.o.s.hazelcast.OHazelcastPlugin - [database3] class blob, creation of new local cluster 'blob_database3' (id=-1)
>> 2014-12-12 10:32:48.389 [main] INFO c.o.o.s.hazelcast.OHazelcastPlugin - [database3] class blob, set mastership of cluster 'blob_database3' (id=239) to 'database3'
>>
>> When the application is running, it seems that each node tries to insert
>> things into its own local cluster, which can be problematic during *update
>> operations*, as the engine will think the record is not there!
>>
>> For instance, I have 3 clusters: #1 (local to node1), #2 (local to node2),
>> #3 (local to node3). If I send a new doc to node1 and save it there, it
>> will get the ID #1:0; if I then send that document to node2 for *update*,
>> it will check local cluster #2, see no record to update, and try to save
>> it, but then the indexer will throw:
>> com.orientechnologies.orient.core.storage.ORecordDuplicatedException:
>> Cannot index record #68:0: found duplicated key 'test' in index 'testIdx'
>> previously assigned to the record #143:0
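>>
>> (A possible workaround sketch: if the client keeps the RID returned by the
>> first save, the update can address the record directly by RID instead of
>> re-saving the document; the RID and field value here are just
>> illustrative:
>>
>> UPDATE #1:0 SET name = 'test'
>>
>> Since the RID already encodes the cluster, this shouldn't depend on which
>> node's local cluster the record originally landed in.)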
>>
>> When I check the state of clusters from the console they are also
>> completely different:
>>
>> orientdb {db=testOrient}> clusters
>>
>> CLUSTERS
>> -----------------+-----+-------------------+---------+
>>  NAME            | ID  | CONFLICT STRATEGY | RECORDS |
>> -----------------+-----+-------------------+---------+
>>  test            | 76  |                   | 11      |
>>  test_database02 | 140 |                   | 4537    |
>>  test_database03 | 174 |                   | 4006    |
>> -----------------+-----+-------------------+---------+
>>  TOTAL = 3       |     |                   | 8554    |
>> -----------------+-----+-------------------+---------+
>>
>> Is this a bug in 2.0, or am I misconfiguring something? Might it be that
>> the other nodes don't get notified about the mastership change, and every
>> node is trying to save only in its respective local cluster?
>>
>> Mateusz
>>
>>
>
--
---
You received this message because you are subscribed to the Google Groups
"OrientDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
For more options, visit https://groups.google.com/d/optout.