So for upgrading Paxos to v2, the non-serial consistency level should be set to
ANY or LOCAL_QUORUM, and the serial consistency level should still be SERIAL or
LOCAL_SERIAL. Got it, thanks!
From: Laxmikant Upadhyay
Date: Tuesday, 12 March 2024 at 7:33 am
To: user@cassandra.apache.org
Cc: Weng
You need to set both in the case of LWT; your regular non-serial consistency
level will only be applied during the commit phase of the LWT.
On Wed, 6 Mar, 2024, 03:30 Weng, Justin via user,
wrote:
> Hi Cassandra Community,
>
>
>
> I’ve been investigating Cassandra Paxos v2 (as implemented in
commit consistency level for LWT after upgrading
Paxos.
In
cqlsh<https://docs.datastax.com/en/cql-oss/3.3/cql/cql_reference/cqlshSerialConsistency.html>,
gocql<https://github.com/gocql/gocql/blob/master/session.go#L1247> and Python
driver<https://docs.datastax.com/en/developer/p
My approach to debugging this kind of issue is to turn on tracing. The nice
thing in Cassandra is that you can turn on tracing on only one node and with a
small probability, i.e.
nodetool settraceprobability 0.05 --- run on only 1 node.
Hope it helps.
Regards,
James
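The probability argument means each request is sampled for tracing independently; a quick stdlib sketch of that sampling behaviour (the 5% figure matches the nodetool example above, and the fixed seed is just to make the demo repeatable):

```python
import random

def should_trace(probability, rng):
    # Each request is traced independently with the given probability,
    # mirroring what `nodetool settraceprobability 0.05` configures.
    return rng.random() < probability

rng = random.Random(42)  # fixed seed for a repeatable demo
traced = sum(should_trace(0.05, rng) for _ in range(10_000))
print(traced)  # on the order of 500 of the 10,000 requests
```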
On Thu, Jul 21, 2022 at 2:50 PM Tolbert
I'd bet the JIRA that Paul is pointing to is likely what's happening
here. I'd look for read repair errors in your system logs or in your
metrics (if you have easy access to them).
There are operations that can happen during the course of a query's execution
that may run at different consistency levels, so see if that ticket applies to
your experience.
Thanks
Paul Chandler
> On 21 Jul 2022, at 15:12, pwozniak wrote:
>
> Yes, I did it. Nothing like this in my code. Consistency level is set only in
> one place (shown below).
>
>
>
> On 7/21/22 4:08 PM, manish khandelw
Yes, I did it. Nothing like this in my code. Consistency level is set
only in one place (shown below).
On 7/21/22 4:08 PM, manish khandelwal wrote:
Consistency can also be set on a per-statement basis. So please check in
your code whether you might be setting consistency 'ALL' for some queries.
It doesn't make any sense to see consistency level ALL if the code is
not explicitly using it. My best guess is that somewhere in the code the
consistency level was overridden.
On 21/07/2022 14:52, pwozniak wrote:
Hi,
we have the following code (java driver):
cluster = Cluster.builder()
> .withCredentials(userName, password).build();
> session = cluster.connect(keyspaceName);
>
>
> where ConsistencyLevel.QUORUM is our default consistency level. But we
> keep receiving the following exceptions:
>
>
> com.datastax.driver.core.exceptions.ReadTimeou
))
.withTimestampGenerator(new AtomicMonotonicTimestampGenerator())
.withCredentials(userName, password).build();
session = cluster.connect(keyspaceName);
where ConsistencyLevel.QUORUM is our default consistency level. But we
keep receiving the following exceptions
This is how the getConsistencyLevel method is implemented: it returns the
consistency level of the query, or null if no consistency level has been set
using setConsistencyLevel.
Regards
Manish
On Fri, Jun 12, 2020 at 3:43 PM Manu Chadha wrote:
> Hi
>
> In my Cassandra Java driver c
Hi
In my Cassandra Java driver code, I am creating a query and then I print the
consistency level of the query
val whereClause = whereConditions(tablename, id);
cassandraRepositoryLogger.trace("getRowsByPartitionKeyId: looking in table
"+tablename+" wit
On Sat, Jun 29, 2019 at 6:19 AM Nimbus Lin wrote:
>
> On the 2nd question, would you like to tell me how to change a
> write's and a read's consistency level separately in cqlsh?
>
Not that I know of any special syntax for that, but you may add an explicit
"CONSIST
in JConsole later.
On the 2nd question, would you like to tell me how to change a write's
and a read's consistency level separately in cqlsh?
Otherwise, how does the documented R + W > RF rule work to guarantee strongly
consistent writes and reads?
Thank you!
Si
m a quorum of nodes and detect that the Paxos phase is
> underway and... maybe wait until it is over before responding with the
> latest data? The Paxos phase happens between a quorum so basically even
> though the consistency level is ONE (or indeed ANY as the Python docs
> state), doing a
before responding with the
latest data? The Paxos phase happens between a quorum so basically even
though the consistency level is ONE (or indeed ANY as the Python docs
state), doing a read with SERIAL implies that the write actually took place
at a consistency level equivalent to QUORUM.
Here also
erlap in that case.
It would be great if anyone can clarify this.
Thanks,
Hiro
On Thu, May 23, 2019 at 3:53 PM Craig Pastro wrote:
>
> Hello!
>
> I am trying to understand the consistency level (not serial consistency)
> required for LWTs. Basically what I am trying to understand i
Hello!
I am trying to understand the consistency level (not serial consistency)
required for LWTs. Basically, what I am trying to understand is whether a
consistency level of ONE is enough for an LWT write operation if I do my
read with a consistency level of SERIAL.
It would seem so based on what
Short answer is no, because missing consistency isn’t an error and there’s no
way to know you’ve missed data without reading at ALL, and if it were ok to
read at ALL you’d already be doing it (it’s not ok for most apps).
> On May 7, 2019, at 8:05 AM, Fd Habash wrote:
>
> Typically, when a rea
Typically, when a read is submitted to C*, it may complete with …
1. No errors & returns expected data
2. Errors out with UnavailableException
3. No error & returns zero rows on the first attempt, but rows are returned on
subsequent runs.
The third scenario happens as a result of cluster entropy, especially d
batch is really needed
>> for the statements. Cassandra batches are for atomicity – not speed.
>>
>>
>>
>> Sean Durity
>>
>> Staff Systems Engineer – Cassandra
>>
>> MTC 2250
>>
>> #cassandra - for the latest news and updates
>>
> *From:* Mahesh Daksha
> *Sent:* Thursday, April 11, 2019 5:21 AM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Getting Consistency level TWO w
: [EXTERNAL] Re: Getting Consistency level TWO when it is requested
LOCAL_ONE
Hi Jean,
I want to understand how you are setting the write consistency level as LOCAL
ONE. That is with every query you mentioning consistency level or you have set
the spring cassandra config with provided
Hi Jean,
I want to understand how you are setting the write consistency level to
LOCAL_ONE: are you specifying the consistency level with every query, or have
you set it in the Spring Cassandra config with a provided consistency level?
Like this:
cluster.setQueryOptions(new
QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_ONE));
Hello everyone,
I have a case where the developers are using spring data framework for
Cassandra. We are writing batches setting consistency level at LOCAL_ONE
but we got a timeout like this
*Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
Cassandra timeout during BATCH_LOG
cqlsh -e "DESCRIBE KEYSPACE system_auth;"
Or to check them all: cqlsh -e "DESCRIBE KEYSPACES;"
I don't know what's wrong exactly, but your application is truncating with
a consistency level of 'ALL', meaning all the replicas must be up for your
application to work.
Le mer. 19 sept. 2
*To:* user@cassandra.apache.org
*Subject:* Error during truncate: Cannot achieve consistency level ALL ,
how to fix it
Hi All,
I am new to Cassandra. Following below link
https://grokonez.com/spring-framework/spring-data/start-spring-data-cassandra-springboot#III_Sourcec
What RF is your system_auth keyspace?
If it's one, match it to the user keyspace, and restart the node.
From: sha p [mailto:shatestt...@gmail.com]
Sent: 19 September 2018 11:49
To: user@cassandra.apache.org
Subject: Error during truncate: Cannot achieve consistency level ALL , how to
fix it
Hi
t with RF = 2 , but when I run
>>> this application from above source code bellow error is thrown
>>> Caused by: com.datastax.driver.core.exceptions.TruncateException: Error
>>> during truncate: Cannot achieve consistency level ALL """
>>>
>>>
>>> What am I doing wrong here? How do I fix it? Please help me.
>>>
>>> Regards,
>>> Shyam
>>>
>>
Hello,
What is the consistency level used when performing COPY command using CQL
interface?
I don't see anything in the documentation:
https://docs.datastax.com/en/cql/3.1/cql/cql_reference/copy_r.html
I am setting CONSISTENCY LEVEL at the cql level and then running a copy
command, does that
It's best-practice to disable the default user ("cassandra" user) after
enabling password authentication on your cluster. The default user reads
with a CL.QUORUM when authenticating, while other users use CL.LOCAL_ONE.
This means it's more likely you could experience authentication issues,
even if
On Thu, Jul 6, 2017 at 6:58 PM, Charulata Sharma (charshar) <
chars...@cisco.com> wrote:
> Hi,
>
> I am facing similar issues with SYSTEM_AUTH keyspace and wanted to know
> the implication of disabling the "*cassandra*" superuser.
>
Unless you have scheduled any tasks that require the user with t
4, 2017 at 2:16 AM
To: Oleksandr Shulgin
mailto:oleksandr.shul...@zalando.de>>
Cc: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>"
mailto:user@cassandra.apache.org>>
Subject: Re: Cannot achieve consistency level LOCAL_ONE
Thanks for the detailed explanation
On 2017-06-15 19:10 (-0700), srinivasarao daruna
wrote:
> Hi,
>
> Recently one of our Spark jobs had missed the Cassandra consistency property
> and the number-of-concurrent-writes property.
Just for the record, you still have a consistency level set, it's just set to
whatever
tables, for which have always had consistency
level proper. We started repair, but due to the volume of data, repair
might take a day or two to complete. Meanwhile, we wanted to get some inputs,
as the error raised a lot of questions.
1) Is there a relation between mutation failures and read timeouts, and
Thanks for the detailed explanation. You solved my problem.
Cheers,
-Simon
From: Oleksandr Shulgin
Date: 2017-06-14 17:09
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cannot achieve consistency level LOCAL_ONE
On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com
wrote:
Thanks for the
On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com wrote:
> Thanks for the reply.
> My system_auth settings are as below; what should I do with them? And I'm
> curious why the newly added node is responsible for user
> authentication.
>
> CREATE KEYSPACE system_auth WITH replication =
replication_factor': '1'} AND durable_writes = true;
-Simon
From: Oleksandr Shulgin
Date: 2017-06-14 16:36
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cannot achieve consistency level LOCAL_ONE
On Wed, Jun 14, 2017 at 9:11 AM, wxn...@zjqunshuo.com
wrote:
Hi,
Cluster set up:
1 DC with 5
During the down
> period, all 4 other nodes report "Cannot achieve consistency
> level LOCAL_ONE" constantly until I brought up the dead node. My data
> seems lost during that down time. To me this could not happen because the
> write CL is LOCAL_ONE and only one node was dea
Hi,
Cluster set up:
1 DC with 5 nodes (each node having 700GB data)
1 kespace with RF of 2
write CL is LOCAL_ONE
read CL is LOCAL_QUORUM
One node was down for about 1 hour because of OOM issue. During the down
period, all 4 other nodes report "Cannot achieve consistency level LOCA
Short of actually making ConsistencyLevel pluggable or adding/changing one
of the existing levels, an alternative approach would be to divide up the
cluster into either real or pseudo-datacenters (with RF=2 in each DC), and
then write with QUORUM (which would be 3 nodes, across any combination of
d
Firstly, this situation only occurs if you need strong consistency and are
using an even replication factor (RF4, RF6, etc).
Secondly, either the read or write still need to be performed at a minimum
level of QUORUM. This means there are no extra availability benefits from
your proposal (i.e. a min
Would love to see real pluggable consistency levels. Sorta sad it got
wont-fixed - may be time to revisit that, perhaps it's more feasible now.
https://issues.apache.org/jira/browse/CASSANDRA-8119 is also semi-related,
but a different approach (CL-as-UDF)
On Thu, Jun 8, 2017 at 9:26 PM, Brandon W
I don't disagree with you there and have never liked TWO/THREE. This is
somewhat relevant: https://issues.apache.org/jira/browse/CASSANDRA-2338
I don't think going to CL.FOUR, etc, is a good long-term solution, but I'm
also not sure what is.
On Thu, Jun 8, 2017 at 11:20 PM, Dikang Gu wrote:
>
To me, CL.TWO and CL.THREE are more like workarounds for the problem; for
example, they do not work if the number of replicas goes to 8, which is
possible in our environment (2 replicas in each of 4 DCs).
What people want from quorum is strong consistency guarantee, as long as
R+W > N, there are th
> We have CL.TWO.
>
>
>
This was actually the original motivation for CL.TWO and CL.THREE if memory
serves:
https://issues.apache.org/jira/browse/CASSANDRA-2013
We have CL.TWO.
On Thu, Jun 8, 2017 at 10:03 PM, Dikang Gu wrote:
> So, for the quorum, what we really want is that there is one overlap among
> the nodes in write path and read path. It actually was my assumption for a
> long time that we need (N/2 + 1) for write and just need (N/2) for read,
>
> So, for the quorum, what we really want is that there is one overlap among
> the nodes in write path and read path. It actually was my assumption for a
> long time that we need (N/2 + 1) for write and just need (N/2) for read,
> because it's enough to provide the strong consistency.
>
You are wr
So, for the quorum, what we really want is that there is one overlap among
the nodes in write path and read path. It actually was my assumption for a
long time that we need (N/2 + 1) for write and just need (N/2) for read,
because it's enough to provide the strong consistency.
On Thu, Jun 8, 2017
It would be a little weird to change the definition of QUORUM, which means
majority, to mean something other than majority for a single use case.
Sounds like you want to introduce a new CL, HALF.
On Thu, Jun 8, 2017 at 7:43 PM Dikang Gu wrote:
> Justin, what I suggest is that for QUORUM consisten
Justin, what I suggest is that for the QUORUM consistency level, the blocking
count for writes should be (num_replica/2)+1, the same as today, but for read
requests we only need to access (num_replica/2) nodes, which should still
provide strong consistency.
Dikang.
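The overlap arithmetic behind this proposal can be checked directly; a small stdlib sketch assuming RF = 4 as in this thread:

```python
def quorum(rf):
    # Majority of replicas: floor(rf / 2) + 1
    return rf // 2 + 1

def overlap_guaranteed(w, r, rf):
    # Strong consistency needs W + R > RF, so that every read set
    # intersects every write set in at least one replica.
    return w + r > rf

rf = 4
assert quorum(rf) == 3
# Quorum write + quorum read: 3 + 3 > 4, overlap guaranteed.
assert overlap_guaranteed(quorum(rf), quorum(rf), rf)
# Quorum write + half read (the proposal): 3 + 2 > 4, still overlaps.
assert overlap_guaranteed(quorum(rf), rf // 2, rf)
# Half write + half read: 2 + 2 = 4, two disjoint halves are possible.
assert not overlap_guaranteed(rf // 2, rf // 2, rf)
```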
On Thu, Jun 8, 2017 at 7:38 PM, Justin Ca
2/4 for write and 2/4 for read would not be sufficient to achieve strong
consistency, as there is no overlap.
In your particular case you could potentially use QUORUM for write and TWO
for read (or vice-versa) and still achieve strong consistency. If you add
additional nodes in the future this wou
Hello there,
We have some use cases that do consistent read/write requests, and we
have 4 replicas in that cluster, according to our setup.
What's interesting to me is that, for both read and write quorum requests,
they are blocked for 4/2+1 = 3 replicas, so we are accessing 3 (for write)
+ 3 (
Thanks for the perspective Ben, it's food for thought.
At minimum, it seems like the documentation should be updated to mention that
the retry policy will not be consulted when using a local consistency level but
with no local nodes available. That way, people won't be surprised
ndra.apache.org"
Subject: Consistency Level vs. Retry Policy when no local nodes are
available
I am running DSE 5.0, and I have a Java client using the Datastax 3.0.0
client library.
The client is configured to use a DCAwareRoundRobinPolicy wrapped in a
TokenAwarePolicy. Nothing special.
When I run
assandra.apache.org<mailto:user@cassandra.apache.org>"
mailto:user@cassandra.apache.org>>
Subject: Consistency Level vs. Retry Policy when no local nodes are available
I am running DSE 5.0, and I have a Java client using the Datastax 3.0.0 client
library.
The client is configur
If reading from a materialized view with a consistency level of QUORUM, am I
guaranteed to have the most recent view? In other words, is the W + R > N
contract maintained for MVs as well, for both reads and writes?
Thanks!
ed to the X nodes in the remote DC. But it will only be
> used to indeed do a local operation as a fallback if the operation is not
> using a LOCAL_* consistency level.
>
> Sorry I have been so long answering you.
>
> ---
> Alain Rodriguez - al...@the
nodes in the remote DC. But it will only be
used to indeed do a local operation as a fallback if the operation is not
using a LOCAL_* consistency level.
Sorry I have been so long answering you.
---
Alain Rodriguez - al...@thelastpickle.com
France
The Last Pickle - Apache
Hello,
Using withLocalDC="myLocalDC" and withUsedHostsPerRemoteDc>0 will guarantee
that you will connect to one of the nodes in "myLocalDC",
but DOES NOT guarantee that your read/write request will be acknowledged by
a "myLocalDC" node. It may well be acknowledged by a remote DC node as
well, eve
I was wondering if there are users in this list using consistency level ALL
and their reasons for doing so?
For example, would the errors for deleting a financial transaction due to
an error be reason enough to use consistency level of ALL? Are there other
strategies people would use to avoid
On Thu, Mar 31, 2016 at 4:35 AM, Alain RODRIGUEZ wrote:
> My understanding is using RF 3 and LOCAL_QUORUM for both reads and writes
> will provide strong consistency and high availability. One node can go
> down without lowering the consistency. Or RF = 5, Quorum = 3,
> allowing 2 no
> offers:
>
> http://docs.datastax.com/en/cassandra/3.x/cassandra/dml/dmlConfigConsistency.html
> .
>
> In short, Cassandra does indeed guarantee the degree of immediate
> consistency that you specify (and presumably want.)
>
>
> -- Jack Krupansky
>
> On Sun, Mar 27, 2016 at 6:36 PM,
com/en/cassandra/3.x/cassandra/dml/dmlConfigConsistency.html
.
In short, Cassandra does indeed guarantee the degree of immediate
consistency that you specify (and presumably want.)
-- Jack Krupansky
On Sun, Mar 27, 2016 at 6:36 PM, Harikrishnan A wrote:
> Hello,
>
> I have a question re
Hello,
I have a question regarding consistency level settings in a multi data center
environment. What are the preferred CL settings in this scenario for
immediate consistency, QUORUM or LOCAL_QUORUM?
If the replication Factor is set to 3 each ( 2 Data Centers) , the QUORUM (
writes/read
ny CL except SERIAL/LOCAL_SERIAL
>
> Setting the consistency level for Paxos is useful in the context of multi
> data centers only. SERIAL => require a majority wrt RF in all DCs.
> LOCAL_SERIAL => majority wrt RF in local DC only
>
> Hope that helps
>
>
>
>
mutation itself. In this case you can use
any CL except SERIAL/LOCAL_SERIAL
Setting the consistency level for Paxos is useful in the context of multiple
data centers only. SERIAL => requires a majority w.r.t. RF across all DCs.
LOCAL_SERIAL => requires a majority w.r.t. RF in the local DC only.
Hope that helps
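A sketch of the majority counts implied by those definitions, assuming a hypothetical RF of 3 in each of two DCs:

```python
def quorum(replicas):
    # Majority: floor(replicas / 2) + 1
    return replicas // 2 + 1

rf = {"dc1": 3, "dc2": 3}  # hypothetical per-DC replication factors
# SERIAL: majority across the replicas in all DCs.
serial_required = quorum(sum(rf.values()))   # 4 of 6
# LOCAL_SERIAL: majority of the replicas in the local DC only.
local_serial_required = quorum(rf["dc1"])    # 2 of 3
print(serial_required, local_serial_required)
```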
On Thu,
ncy_level
>>
>> On Thu, Jan 7, 2016 at 3:44 AM, Hiroyuki Yamada
>> wrote:
>>
>>> Hi,
>>>
>>> I've been doing some POCs of lightweight transactions and
>>> I come up with some questions, so please let me ask them to you here.
>>
POCs of lightweight transactions and
>> I come up with some questions, so please let me ask them to you here.
>>
>> So the question is:
>> what consistency level should I set when using IF NOT EXISTS or UPDATE ...
>> IF statements?
>>
>> I used the st
ation, which was causing the Cassandra-Java-Driver to insert "If Not
> Exists" in the insert query, thus invoking SERIAL consistency-level.
>
> We removed the annotation (didn't really need that), and we have not
> observed the error since about an hour or so.
>
>
>
Hi All.
I think we got the root-cause.
One of the fields in one of the class was marked with "@Version"
annotation, which was causing the Cassandra-Java-Driver to insert "If Not
Exists" in the insert query, thus invoking SERIAL consistency-level.
We removed the annotation
_transaction_c.html
>>>
>>> On Mon, Nov 2, 2015 at 1:29 AM Ajay Garg wrote:
>>>
>>>> Hi All.
>>>>
>>>> I have a 2*2 Network-Topology Replication setup, and I run my
>>>> application via DataStax-driver.
>>>>
&g
and I run my
>>> application via DataStax-driver.
>>>
>>> I frequently get the errors of type ::
>>> *Cassandra timeout during write query at consistency SERIAL (3 replica
>>> were required but only 0 acknowledged the write)*
>>>
>>> I have al
r.
>>
>> I frequently get the errors of type ::
>> *Cassandra timeout during write query at consistency SERIAL (3 replica
>> were required but only 0 acknowledged the write)*
>>
>> I have already tried passing a "write-options with LOCAL_QUORUM
>> c
type ::
> *Cassandra timeout during write query at consistency SERIAL (3 replica
> were required but only 0 acknowledged the write)*
>
> I have already tried passing a "write-options with LOCAL_QUORUM
> consistency-level" in all create/save statements, but I still get this
&
"write-options with LOCAL_QUORUM
consistency-level" in all create/save statements, but I still get this
error.
Does something else need to be changed in /etc/cassandra/cassandra.yaml too?
Or maybe in some other place?
--
Regards,
Ajay
Hi,
I'm not sure how the consistency level is applied to a batch statement. I
didn't find detailed information on datastax.com (1)
<http://docs.datastax.com/en/cql/3.0/cql/cql_reference/batch_r.html>
regarding that.
- It is possible to set a CL on individual statements.
- It is possible
I take care of it? Or how does Cassandra guarantee it?
Regards,
Peter
From: daemeon reiydelle [mailto:daeme...@gmail.com]
Sent: 17 March 2015 15:04
To: user@cassandra.apache.org
Cc: Saladi Naidu
Subject: Re: Is Table created in all the nodes if the default consistency level used
Oops, my bad. Not "master no
data node? Because master node is down, table can not be created on data
> nodes.
>
>
>
> Regards,
>
> Peter
>
>
>
>
>
>
>
> *From:* daemeon reiydelle [mailto:daeme...@gmail.com]
> *Sent:* 17 March 2015 13:38
> *To:* user@cassandra.apache.org; Saladi Naidu
>
: daemeon reiydelle [mailto:daeme...@gmail.com]
Sent: 17 March 2015 13:38
To: user@cassandra.apache.org; Saladi Naidu
Subject: Re: Is Table created in all the nodes if the default consistency level used
If I am following your thread correctly, I think you might be confusing the
"creation" of a tabl
go "someplace", the memtables and sst's start tracking that data.
Does this clarify?
As an aside, you will only get stale data if the read consistency level
together with the number of nodes that went offline (which hold copies of
the table), allow it: e.g. imagine read consistency 1
less
than the number of nodes, you will face AUTH issues.
Naidu Saladi
From: 鄢来琼
To: "user@cassandra.apache.org"
Sent: Monday, March 16, 2015 2:13 AM
Subject: Re: Is Table created in all the nodes if the default consistency
level used
nt to guarantee table is created in all the nodes.
>
>
>
> Peter
>
>
>
> *From:* 鄢来琼
> *Sent:* 16 March 2015 15:14
> *To:* user@cassandra.apache.org
> *Subject:* Re: Is Table created in all the nodes if the default consistency
> level used
>
>
>
> Hi Daemeon,
: Is Table created in all the nodes if the default consistency level used
Hi Daemeon,
Yes, I use the "NetworkTopologyStrategy" strategy for "Table_test",
but the system keyspace is a Cassandra-internal keyspace; its strategy is
LocalStrategy.
So my question is how to guarantee "Table_test" is created in all
reiydelle [mailto:daeme...@gmail.com]
Sent: 16 March 2015 14:35
To: user@cassandra.apache.org
Subject: Re: Is Table created in all the nodes if the default consistency level used
If you want to guarantee that the data is written to all nodes before the code
returns, then yes you have to use "consis
If you want to guarantee that the data is written to all nodes before the
code returns, then yes you have to use "consistency all". Otherwise there
is a small risk of outdated data being served if a node goes offline longer
than hints timeouts.
Somewhat looser options that can assure multiple copi
Could you tell me whether the meta data of the new table are build in all the
nodes after execute the following statement.
cassandra_session.execute_async(
    """CREATE TABLE Table_test(
        ID uuid,
        Time timestamp,
        Value double,
        Date timestamp,
        PRIMARY KEY ((ID, Date)
L
>> is returned?
>>
>> On Fri, Jan 30, 2015 at 2:28 PM, Jan wrote:
>>
>>> HI Michal;
>>>
>>> The consistency level defaults to ONE for all write and read operations.
>>> However consistency level is also set for the keyspace.
>>>
&g
wrote:
> Hi Jan,
>
> I'm using only one keyspace. Even if it defaults to ONE why sometimes ALL
> is returned?
>
> On Fri, Jan 30, 2015 at 2:28 PM, Jan wrote:
>
>> HI Michal;
>>
>> The consistency level defaults to ONE for all write and read operations.
Hi Jan,
I'm using only one keyspace. Even if it defaults to ONE, why is ALL
sometimes returned?
On Fri, Jan 30, 2015 at 2:28 PM, Jan wrote:
> HI Michal;
>
> The consistency level defaults to ONE for all write and read operations.
> However consistency level is also set
HI Michal;
The consistency level defaults to ONE for all write and read operations.
However consistency level is also set for the keyspace.
Could it be possible that your queries are spanning multiple keyspaces which
bear different levels of consistency ?
Cheers,
Jan
C* Architect
On
Hi,
We're using C* 2.1.2, django-cassandra-engine which in turn uses cqlengine.
LOCAL_QUORUM is set as the default consistency level. From time to time we get
timeouts while talking to the database, but what is strange is that the
returned consistency level is not LOCAL_QUORUM:
code=1200 [Coordinator node
tacenter and we want to set the default consistency
>> level to LOCAL_ONE instead of ONE but we don't know how to configure it.
>> We set LOCAL_QUORUM via cql driver for the desired queries but we won't
>> do the same for the default one.
>>
>> Thanks
Cassandra itself does not have default consistency levels. These are only
configured in the driver.
On Fri, Nov 14, 2014 at 8:54 AM, Adil wrote:
> Hi,
> We are using two datacenter and we want to set the default consistency
> level to LOCAL_ONE instead of ONE but we don't know ho
Hi,
We are using two datacenter and we want to set the default consistency
level to LOCAL_ONE instead of ONE but we don't know how to configure it.
We set LOCAL_QUORUM via the CQL driver for the desired queries, but we can't
do the same for the default one.
Thanks in advance
Adil
A follow up on the earlier question.
I meant to ask earlier whether control returns to the client after the batch
log is written on the coordinator, irrespective of the consistency level
specified.
Also: will the coordinator attempt all statements one after the other, or
in parallel ?
Thanks
On Tue, Sep 16, 2014
Is consistency level honored for batch statements?
If I have 100 insert/update statements in my batch and use LOCAL_QUORUM
consistency, will control return from the coordinator only after a
local-quorum update has been done for all 100 statements?
Or is it different ?
Thanks
Vish
What is recommended read/write consistency level (CL) for counters?
Yes I know that write_CL + read_CL > RF is recommended.
But I got strange results when running my JUnit tests with different CLs
against a 3-node cluster.
I checked 9 combinations: (write=ONE,QUORUM,ALL) x (read=ONE,QUORUM,