Hi,

Adding the java-driver ML to this thread, because this does look like
a bug.

Are you able to reproduce it in a clean environment using the same C*
version and driver version?
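
For reference, the quorum size is derived from the sum of the per-DC
replication factors: quorum = (sum_of_rf / 2) + 1, using integer division.
A minimal sketch of that arithmetic (the `QuorumMath` class and `quorum`
helper are illustrative names of mine, not driver API):

```java
// Sketch of how Cassandra derives the QUORUM size from the replication
// factors: quorum = floor(sum_of_rf / 2) + 1. Class and method names are
// illustrative, not part of Cassandra or the java-driver.
public class QuorumMath {
    static int quorum(int... perDcReplicationFactors) {
        int sum = 0;
        for (int rf : perDcReplicationFactors) {
            sum += rf;
        }
        return sum / 2 + 1; // integer division == floor
    }

    public static void main(String[] args) {
        // Single DC with RF=2: quorum is 2, i.e. the same as ALL.
        System.out.println(quorum(2));    // 2
        // Two DCs with RF=2 each: quorum is 3 -- which would match the
        // "3 replica were required" in the exception quoted below.
        System.out.println(quorum(2, 2)); // 3
        // Single DC with RF=3: quorum is 2, tolerating one node down.
        System.out.println(quorum(3));    // 2
    }
}
```

So with a single DC at RF=2 the driver should only ever require 2
acknowledgements for QUORUM, which is why a requirement of 3 looks wrong.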


On 27 February 2018 at 10:05, Abhishek Kumar Maheshwari <
[email protected]> wrote:

> Hi Alex,
>
> I have only one DC (named DC1) and only one keyspace, so I don't think
> either of those scenarios is possible. (Yes, in my case QUORUM is
> equivalent to ALL.)
>
> cqlsh> SELECT * FROM system_schema.keyspaces WHERE keyspace_name = 'adlog';
>
>  keyspace_name | durable_writes | replication
> ---------------+----------------+--------------------------------------------------------------------------------
>          adlog |           True | {'DC1': '2', 'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy'}
>
>
> On Tue, Feb 27, 2018 at 2:27 PM, Oleksandr Shulgin <
> [email protected]> wrote:
>
>> On Tue, Feb 27, 2018 at 9:45 AM, Abhishek Kumar Maheshwari <
>> [email protected]> wrote:
>>
>>>
>>> I have a keyspace in Cassandra (version 3.0.9, 12 servers in total) with
>>> the definition below:
>>>
>>> {'DC1': '2', 'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy'}
>>>
>>> From time to time I am getting the exception below:
>>>
>>> [snip]
>>
>>> Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
>>> Cassandra timeout during write query at consistency QUORUM (3 replica were
>>> required but only 2 acknowledged the write)
>>>         at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:100)
>>>         at com.datastax.driver.core.Responses$Error.asException(Responses.java:134)
>>>         at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:525)
>>>         at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1077)
>>>
>>> why is it waiting for an acknowledgement from a 3rd server when the
>>> replication factor is 2?
>>>
>>
>> I see two possibilities:
>>
>> 1) The data in this keyspace is replicated to another DC, so there is
>> also 'DC2': '2', for example, but you didn't show it.  In that case QUORUM
>> requires more than 2 nodes.
>> 2) The write was targeting a table in a different keyspace than you think.
>>
>> In any case, QUORUM (or LOCAL_QUORUM) with RF=2 is equivalent to ALL.  Not
>> sure why you would use it in the first place.
>>
>> For consistency levels involving a quorum you want to go with RF=3 in a
>> single DC.  For multi-DC you should decide whether you want QUORUM or
>> EACH_QUORUM for your writes and derive the RFs from that.
>>
>> Cheers,
>> --
>> Alex
>>
>>
>
>
> --
>
> *Thanks & Regards,*
> *Abhishek Kumar Maheshwari*
> *+91- 9999805591 (Mobile)*
>
> Times Internet Ltd. | A Times of India Group Company
>
> FC - 6, Sector 16A, Film City,  Noida,  U.P. 201301 | INDIA
>
> *Please do not print this email unless it is absolutely necessary.
> Spread environmental awareness.*
>