Re: [Neo4j] Node Id generation deadlock
On Nov 2, 2011, at 13:33 , Balazs E. Pataki wrote:

> Hi, I had a similar issue (also with ID generation), and I would also be interested in a solution, or in how synchronization should be done to avoid deadlocks like this in the transaction.

Have you considered using IDs that can be generated without consulting the database, or even with in-VM synchronization? E.g. UUID, GUID or VMID.

___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user
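The first suggestion above (IDs generated without consulting the database) can be sketched in a few lines; this is a minimal, self-contained example using `java.util.UUID`, and the class name is illustrative, not part of Neo4j's API:

```java
import java.util.UUID;

public class NodeIdGenerator {
    // Generate a collision-resistant id entirely in the client VM, so no
    // database lock is taken and the id-allocation deadlock cannot occur.
    public static String nextId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        String a = nextId();
        String b = nextId();
        System.out.println(a.equals(b)); // two calls give distinct ids: false
        System.out.println(a.length());  // canonical UUID form is 36 chars
    }
}
```

The id would then be stored as an indexed node property rather than relying on Neo4j's internal node ids, at the cost of 36 bytes per id instead of a long.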
Re: [Neo4j] Replication corner cases?
On Aug 23, 2011, at 17:30 , Mattias Persson wrote:

>> Hm, actually client X can't read anything touched by T from master, since slave A will have taken write locks on the things it modifies, and the write locks are associated with T, which never finishes in this example. Still, master's state will diverge from cluster state.
>
> It's ok to read things that are held by write locks; reads will not block.

Hm, so to have safe replication with the scheme I described, readers would also need to take read locks on the items they read, even if they don't plan to update anything based on the results. Then again, if read locks are taken like that, the client may as well read from any slave, since a read lock causes state to be synchronized from master.
Re: [Neo4j] HA consistency
On Aug 19, 2011, at 07:57 , David Rader wrote:

> It looks like the HA implementation is for eventual consistency, tunable by how often a slave polls the master for updates from other nodes. With a load-balanced cluster, is the best practice to simply use sticky sessions on clients, to ensure that immediate reads of updated data are served by the same node that wrote the update and are therefore consistent? Any other recommended approaches?

If your goal is HA, there are two other approaches: 1) always read from master, and 2) always take a read lock on the things you read.

Always reading from master works because writes are synchronously replicated to master, and taking a read lock works because taking a read lock always synchronizes with master (although it of course also disallows related writes for the duration of your transaction).

These solutions affect write performance (reading from master consumes master capacity, and taking read locks prevents other transactions from completing). Read performance is certainly affected as well compared to sticky sessions, and is likely to be considerably lower because of the synchronization requirements and the load on master.

Consistency guarantees would be as follows:
- Reading from arbitrary slaves guarantees very little
- Sticky sessions guarantee read-everything-up-until-your-previous-write
- Reading from master guarantees consistency with respect to communication over side channels (if another node, after committing, tells you that it wrote something, you can see that write, or possibly some newer write)
- Taking read locks guarantees read-everything-up-until-your-previous-lock-request, and also repeatable reads
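The sticky-sessions option in the guarantee list above amounts to routing each client session to a fixed cluster member, so a client always reads from the node that served its writes. A minimal sketch of such a router (class and node names are hypothetical, not a Neo4j facility):

```java
import java.util.List;

// Hypothetical sticky-session router: pin each session to one cluster
// member so a client's reads hit the node that applied its writes,
// giving read-your-own-writes without loading the master.
public class StickyRouter {
    private final List<String> nodes;

    public StickyRouter(List<String> nodes) {
        this.nodes = nodes;
    }

    public String nodeFor(String sessionId) {
        // Stable hash: the same session always lands on the same node.
        int idx = Math.floorMod(sessionId.hashCode(), nodes.size());
        return nodes.get(idx);
    }

    public static void main(String[] args) {
        StickyRouter r = new StickyRouter(List.of("slaveA", "slaveB", "master"));
        System.out.println(r.nodeFor("s1").equals(r.nodeFor("s1"))); // true
    }
}
```

Note that this only gives the per-session guarantee from the list; it says nothing about reads of other sessions' writes, which still propagate eventually.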
Re: [Neo4j] Replication corner cases?
On Aug 12, 2011, at 20:40 , Tuure Laurinolli wrote:

>>> Updates will however propagate from the master to the other slaves eventually, so a write from one slave is not immediately visible on all other slaves.
>>
>> It sounds like eventual consistency from master to other slaves. If so, I am interested in finding out details about the Neo4j HA member nodes' voting quorum arbitrator setup (assuming ZooKeeper is used).
>
> Looking at the code, it seems that the transaction is first prepare()'d on the slave, then the prepared log is shipped to the master, applied and committed there, and the master txid shipped back and used to commit the transaction on the slave. However, the locks seem to be held (both on slave and master) until the slave finishes committing or rolling back, so no visibility problems should occur.

Further, the Transaction that MasterClient/MasterServer/MasterImpl creates on the server side is apparently only ever really used to hold locks. It is always rolled back (finishTransaction() in MasterImpl). This leads me to wonder if the following scenario is possible:

- Slave A replicates T to master; master commits it and gets ready to return the txid; client X reads it from master; master crashes.
- Is it guaranteed that slave A commits the transaction locally before a new master is elected (since a new master elected at this point won't have T, and thus client X would have read an update that never completed successfully)?

Clearly slave A cannot commit the transaction, and its client gets some sort of error. Also, since the master crashed, a new master will be elected and the transaction will never have existed in the new cluster. Yet client X managed to read the result, which will be wrong.

Also, in this case, when the master is restarted, can it rejoin the cluster, given that its state has diverged from that of the cluster?
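The commit hand-off described above (prepare on slave, ship to master, commit there, then commit locally under the master's txid) can be sketched as follows. This is a toy simulation of the flow as read from the code, not Neo4j's actual API; all names are illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the HA commit hand-off: the slave prepares
// locally, the master commits and allocates the txid, and only then
// does the slave commit under that id. The window between the master
// commit and the slave commit is the corner case discussed above.
public class CommitFlow {
    static final AtomicLong masterTxId = new AtomicLong(0);

    // Master side: apply the shipped prepared log, commit, return a txid.
    static long masterApplyAndCommit(byte[] preparedLog) {
        // ... apply log to the master store and commit ...
        return masterTxId.incrementAndGet();
    }

    // Slave side: prepare, ship to master, commit with master's txid.
    static long slaveCommit(byte[] txLog) {
        // 1. prepare() locally; write locks are already held on the slave.
        // 2. Ship the prepared log to the master, which commits it.
        long txId = masterApplyAndCommit(txLog);
        // 3. Commit locally under the same txid; locks are released only now.
        //    If the slave dies between steps 2 and 3, the master holds a
        //    committed transaction the rest of the cluster may never see.
        return txId;
    }

    public static void main(String[] args) {
        System.out.println(slaveCommit(new byte[0])); // first txid allocated
    }
}
```

The scenario in the bullets above sits exactly in step 3: the master has committed and could in principle serve reads of T before the slave has, except that the write locks held for T on the master prevent that (as the follow-up message notes).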
Re: [Neo4j] Replication corner cases?
On Aug 15, 2011, at 18:18 , Tuure Laurinolli wrote:

> [...]
>
> - Slave A replicates T to master; master commits it and gets ready to return the txid; client X reads it from master; master crashes.
> - Is it guaranteed that slave A commits the transaction locally before a new master is elected (since a new master elected at this point won't have T, and thus client X would have read an update that never completed successfully)?
>
> Clearly slave A cannot commit the transaction, and its client gets some sort of error. Also, since the master crashed, a new master will be elected and the transaction will never have existed in the new cluster. Yet client X managed to read the result, which will be wrong.
>
> [...]

Hm, actually client X can't read anything touched by T from master, since slave A will have taken write locks on the things it modifies, and the write locks are associated with T, which never finishes in this example. Still, master's state will diverge from cluster state.
[Neo4j] Replication corner cases?
Hello,

I read through the HA/replication documentation at http://docs.neo4j.org/chunked/stable/ha.html but a few questions about possible failure modes remain:

Can an HA transaction fail after it's committed on master? Consider the following: client C1 commits transaction T through slave S1, which propagates it to master M, which commits it. What happens to T if S1 crashes now? How does client C1 see this?

If T is rolled back on M upon failure of S1, can client C2 read the result of T from M before S1 has committed T?