The condition you bring up is a misconfigured cluster, period, no matter how
you look at it. In other words, the scenario you're describing does not get
to the heart of whether Cassandra has "strong consistency" or not; I'm sorry
to say your example fails in this regard.

However, let's get at what I believe you're actually trying to talk about:
race-condition protection when you need a set order. That is, by definition,
the kind of guarantee linearizability provides. So without SERIAL or
LOCAL_SERIAL consistency, a data model that depends on _order_ (which your
example does) is going to leave you unhappy; the ALL and ONE consistency
levels do nothing to address your example, with or without clock skew.
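
For illustration only, here is a rough sketch of the kind of linearizable
"first writer wins" protection I mean, using the DataStax Python driver and a
lightweight transaction at SERIAL consistency. The contact point, keyspace,
table, and column names are all made up for the example:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(["127.0.0.1"])      # hypothetical contact point
    session = cluster.connect("demo_ks")  # hypothetical keyspace

    # Conditional write (a Paxos-backed lightweight transaction): it is only
    # applied if no row exists yet, so concurrent writers get a linearizable
    # "first one wins" outcome instead of racing on wall-clock timestamps.
    stmt = SimpleStatement(
        "INSERT INTO events (id, payload) VALUES (%s, %s) IF NOT EXISTS",
        consistency_level=ConsistencyLevel.QUORUM,
        serial_consistency_level=ConsistencyLevel.SERIAL,
    )

    result = session.execute(stmt, ("event-42", "first writer wins"))
    # The returned row carries an [applied] column telling you whether this
    # writer actually won the race.
    print(result.one())

    cluster.shutdown()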

In theory, "the last timestamp of a given table" could probably be answered
well enough for most problem domains just by keeping the servers pointed at
the same NTP server. In practice this is rarely a valid use case: clusters
doing several hundred thousand transactions per second (not uncommon) will
find that the "last timestamp" is at best an approximation and hopelessly
wrong every time, no matter the database technology.
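
For what it's worth, here is a minimal sketch of what "last timestamp" even
means in Cassandra terms: WRITETIME() exposes the microsecond write timestamp
of a cell, assigned by whichever clock generated the write. Again, the
keyspace, table, and column names are invented for the example:

    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])      # hypothetical contact point
    session = cluster.connect("demo_ks")  # hypothetical keyspace

    rows = session.execute(
        "SELECT id, WRITETIME(payload) AS ts FROM events WHERE id = %s",
        ("event-42",),
    )
    for row in rows:
        # ts is microseconds since the epoch; on a busy cluster it is already
        # stale (and clock-skew dependent) by the time you read it.
        print(row.id, row.ts)

    cluster.shutdown()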



On Mon, Sep 7, 2015 at 6:20 AM, ibrahim El-sanosi <ibrahimsaba...@gmail.com>
wrote:

> ""It you need strong consistency and don't mind lower transaction rate,
> you're better off with base""
> I wish you could explain more how this statement relates to my post?
> Regards,
>



-- 

Thanks,
Ryan Svihla
