On 6/25/2011 8:24 AM, Edward Capriolo wrote:
> I played around with the bakery algorithm and had OK success. The
> challenges are that most implementations assume a fixed-size array of
> n clients, and when you get it working it turns out to take a good
> number of Cassandra ops to acquire your bakery lock.


I was thinking that rather than making certain there is a column reserved for each node and having to keep it updated, you could just over-allocate a number large enough to always suffice, like 100. A slice of 100 byte-sized values shouldn't be a significant perf hit vs. 3 or 4. If you only have 3 nodes in your cluster and the last 97 slots go unused, that would be OK; it would be as if those non-existent "customers" never take a number.
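In shared-memory pseudocode the over-allocated variant looks roughly like this. This is a minimal sketch, with Python lists and threads standing in for Cassandra columns and nodes; MAX_CLIENTS, the counter demo, and all names are illustrative, not the actual Cassandra implementation:

```python
import threading
import time

MAX_CLIENTS = 100          # over-allocated; a real cluster may use only a few
choosing = [False] * MAX_CLIENTS
number = [0] * MAX_CLIENTS

def bakery_lock(i):
    # Take a ticket one higher than any outstanding ticket.
    choosing[i] = True
    number[i] = 1 + max(number)
    choosing[i] = False
    # Wait behind every client whose (ticket, id) pair is smaller.
    for j in range(MAX_CLIENTS):
        while choosing[j]:
            time.sleep(0)          # yield while j picks its number
        while number[j] != 0 and (number[j], j) < (number[i], i):
            time.sleep(0)          # j is ahead of us in line

def bakery_unlock(i):
    number[i] = 0                  # hand the ticket back

# Demo: 3 "nodes" out of 100 slots; the other 97 never take a number.
counter = 0
def worker(i):
    global counter
    for _ in range(200):
        bakery_lock(i)
        counter += 1               # critical section
        bakery_unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                     # 600 if mutual exclusion held
```

The unused slots keep choosing=False and number=0 forever, so both waiting loops fall straight through them, which is why over-allocating costs little beyond the wider slice read.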

To optimize for C*, I think you can get away with a minimal number of getSlice calls for the loops. If you're lucky, you can fall through all of them using the results of just one getSlice. Only if some process is "entering" or holds a higher-priority (smaller) number do you need to wait and do another getSlice, and even then only a slice of the remaining columns. I think my logic is correct; do you agree?
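A sketch of that single-snapshot check. The names here are hypothetical: `snapshot` stands for the dict you would build from one getSlice, mapping each column id to its (choosing, number) pair:

```python
def blockers(snapshot, my_num, my_id):
    """Given one snapshot of (choosing, number) pairs per column id --
    i.e. the result of a single getSlice -- return the ids that still
    block us. An empty result means the lock is acquired with no further
    reads; otherwise only these remaining columns need re-reading."""
    out = []
    for j, (is_choosing, num) in sorted(snapshot.items()):
        if j == my_id:
            continue
        if is_choosing:
            out.append(j)          # j is still "entering"
        elif num != 0 and (num, j) < (my_num, my_id):
            out.append(j)          # j holds a smaller (higher-priority) ticket
    return out

# One lucky read: nobody entering, no smaller ticket -> lock acquired.
snap = {0: (False, 0), 1: (False, 7), 2: (False, 0)}
print(blockers(snap, my_num=3, my_id=5))   # []

# Column 1 is ahead and column 2 is entering: re-slice only those two.
snap = {0: (False, 0), 1: (False, 2), 2: (True, 0)}
print(blockers(snap, my_num=3, my_id=5))   # [1, 2]
```

Under that assumption, the best case really is one getSlice per acquisition, and each retry shrinks to a slice over only the columns that blocked you last time.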

Did you have problems other than performance?
