Try changing the chunk length parameter in the compression settings to 4 KB,
and reduce read-ahead to 16 KB if you're using EBS, or 4 KB if you're using a
decent local SSD or NVMe drive.
Counters read before write.
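The "read before write" point is the key cost: a counter update is internally a
read-modify-write, so counter write throughput is tied to read performance,
which is what the chunk-length and read-ahead settings above influence. A toy
model of that extra read (illustrative only, not Cassandra's actual shard-based
counter implementation):

```python
# Toy model: unlike a plain write, a counter update must first read the
# current value before writing the new one (illustrative, not Cassandra
# internals).
store = {}

def plain_write(key, value):
    store[key] = value            # no read needed

def counter_add(key, delta):
    current = store.get(key, 0)   # the extra read that makes counters slower
    store[key] = current + delta
    return store[key]

counter_add("hits", 1)
print(counter_add("hits", 2))  # -> 3
```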
—
Jon Haddad
Rustyrazorblade Consulting
rustyrazorblade.com
On Fri, Apr 5, 2024 at 9:27 AM
Follow-up question on the performance issue with 'counter writes': is there a
parameter or condition that limits the allocation rate for
'CounterMutationStage'? I see 13-18 MB/s for 4.1.4 vs 20-25 MB/s for 4.0.5.
The back-end infra is the same for both clusters, with the same test cases and data model.
Hi,
Unfortunately, the numbers you're posting have no meaning without context.
The speculative retries could be the cause of a problem, or you could
simply be executing enough queries, with fairly high latency variance,
that they trigger often. It's unclear how many queries / second
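To illustrate the variance point: with speculative retry set near the table's
p99 latency, roughly 1% of reads fire a speculative request even on a healthy
cluster, and higher variance pushes that up. A synthetic sketch (the latency
numbers are made up):

```python
import statistics

# Synthetic latency sample: 99% fast reads, 1% slow outliers.
latencies_ms = [1.0] * 990 + [20.0] * 10
p99 = statistics.quantiles(latencies_ms, n=100)[98]   # 99th percentile
speculative = sum(1 for x in latencies_ms if x > p99)
print(speculative)  # requests slower than p99 that would fire a retry -> 10
```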
Hi All,
While debugging the performance dip seen on 4.1.4, I
found a high speculative retries value in nodetool tablestats during read
operations.
I ran the tablestats command below and checked its output every few
seconds, and noticed that the retries keep rising. Also
we are seeing similar perf issues with counter writes - to reproduce:
cassandra-stress counter_write n=10 no-warmup cl=LOCAL_QUORUM -rate
threads=50 -mode native cql3 user= password= -name
op rate: 39,260 ops (4.1) and 63,689 ops (4.0)
latency 99th percentile: 7.7ms (4.1) and 1.8ms
Hi All,
Was going through this mail chain
(https://www.mail-archive.com/user@cassandra.apache.org/msg63564.html)
and was wondering if this could cause a performance degradation in
4.1 without changing compactionThroughput,
as we are seeing a read/write performance dip after upgrading from 4.0 to
We are about to do the same upgrade, although aiming for v4.1.2.
Highly interested in this topic as well.
Luciano Greiner
On Thu, Jan 11, 2024 at 4:13 AM ranju goel wrote:
>
> Hi Everyone,
>
> We are planning to upgrade from 4.0.11 to 4.1.3, the main motive of upgrading
> is 4.0.11 going EOS in
Hi Everyone,
We are planning to upgrade from 4.0.11 to 4.1.3; the main motivation for
upgrading is that 4.0.11 goes EOS in July 2024.
While analyzing JIRAs, we found an open ticket, CASSANDRA-18766 (*high
speculative retries on v4.1.3*), which talks about performance degradation
and has seen no activity since
Hi Arjun,
this is strange. You should be able to use a range query on a column that is
part of the clustering key, as long as all clustering-key columns to the left
of this column are set to fixed values.
So, given the table definition that you specified, your query should work (I
just
is that queries will be based on the date_id. For
example:
SELECT VOICE_AMT, SMS_AMT, DATA_AMT FROM instant_cdr WHERE rowid = 0 and
msisdn = '801000' and date_id >= 231202 AND date_id <= 231204;
The issue is that I can only apply >= and <= on the id column, which is not
used
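The rule above can be stated mechanically: a range restriction (>=, <=) is only
allowed on a clustering column if every clustering column to its left is
restricted by equality. A small checker (the column layout follows the
thread's example; the checker itself is illustrative, not driver code):

```python
def range_restriction_allowed(clustering_cols, eq_restricted, range_col):
    """True if all clustering columns left of range_col have equality restrictions."""
    idx = clustering_cols.index(range_col)
    return all(c in eq_restricted for c in clustering_cols[:idx])

# Hypothetical layout from the thread: PRIMARY KEY ((rowid), msisdn, date_id)
clustering = ["msisdn", "date_id"]
print(range_restriction_allowed(clustering, {"msisdn"}, "date_id"))  # -> True
print(range_restriction_allowed(clustering, set(), "date_id"))       # -> False
```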
Thanks, it helped, but I am also looking for a way to get the total number of
token ranges assigned to that node, which I am currently doing manually (by
subtracting) using nodetool ring.
Best Regards
Ranju
On Fri, Jun 9, 2023 at 12:50 PM guo Maxwell wrote:
> I think nodetool info with --token may do
I think nodetool info with --token may be of some help.
ranju goel wrote on Fri, Jun 9, 2023 at 15:09:
> Hi everyone,
>
> Is there any faster way to calculate the number of token ranges allocated
> to a node
> (x.y.z.w)?
>
> I used the manual way by subtracting the last token with the start token
> shown in the
Hi everyone,
Is there any faster way to calculate the number of token ranges allocated
to a node
(x.y.z.w)?
I used the manual way of subtracting the start token from the last token
shown in nodetool ring, but it is time consuming.
x.y.z.w RAC1 UpNormal 88 GiB
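The subtraction can also be scripted instead of done by hand. A sketch that
counts ranges from a parsed (token, endpoint) ring map, assuming each token
owns the range ending at it (addresses are placeholders; parsing the
`nodetool ring` output into this list is left out):

```python
def ranges_owned(ring, endpoint):
    """ring: list of (token, endpoint); each token owns the range ending at it."""
    return sum(1 for _tok, ep in ring if ep == endpoint)

ring = [(-9000, "x.y.z.w"), (-3000, "10.0.0.2"), (0, "x.y.z.w"), (5000, "10.0.0.3")]
print(ranges_owned(ring, "x.y.z.w"))  # -> 2
```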
are getting below error
from the driver.
The request queue is full
This problem is resolved if we increase the IO threads to 10.
We have some queries regarding this issue.
1. We want to understand the relation between the IO threads and the queue,
and its queue size.
2. It would be great if we
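On question 1, a rough model (an assumption on my part, not the driver's
documented internals): each IO thread services its own request queue of
`queue_size_io` slots, so total in-flight capacity scales with the thread
count, which would explain why raising the threads to 10 made "The request
queue is full" go away. The setting names `cass_cluster_set_num_threads_io`
and `cass_cluster_set_queue_size_io`, and the 8192 default, are per the
DataStax C/C++ driver docs; the arithmetic below is only an illustration:

```python
# Illustrative capacity model: io_threads queues of queue_size_io requests each.
def inflight_capacity(io_threads, queue_size_io=8192):  # 8192: assumed driver default
    return io_threads * queue_size_io

print(inflight_capacity(1))   # -> 8192   (default single IO thread)
print(inflight_capacity(10))  # -> 81920  (after raising IO threads to 10)
```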
Hi Deepti
I think you can reach out to
https://groups.google.com/a/lists.datastax.com/g/cpp-driver-user.
Regards
Manish
On Fri, Dec 23, 2022 at 12:52 PM Deepti Sharma S via user <
user@cassandra.apache.org> wrote:
> Hello Team,
>
>
>
> Could you please help in
Hello Team,
Could you please help in answering the query below?
Regards,
Deepti Sharma
PMP(r) & ITIL
From: Deepti Sharma S via user
Sent: 20 December 2022 18:39
To: user@cassandra.apache.org
Cc: Nandita Singh S
Subject: Query for Cassandra Driver
Hello Team,
We have an Application followi
Hello Team,
We have an application following the C++98 standard, compiled with gcc 7.5.0
on SUSE Linux.
We are currently using the DataStax C/C++ driver (version 2.6), and it is
working fine with the application (C++98).
Now we have a requirement to update the DataStax C/C++ driver to the latest
version, 2.16.
3.11.x versions will be maintained until July 2023. Please refer to
https://cassandra.apache.org/_/download.html
On Thu, Dec 15, 2022, 20:55 Pranav Kumar (EXT) via user <
user@cassandra.apache.org> wrote:
> Hi Team,
>
>
>
> Could you please help us to know when version 3.11.13 is going to be
Hi Team,
Could you please help us to know when version 3.11.13 is going to be EOS?
Until when will we get fixes for version 3.11.13?
Regards,
Pranav
As has been mentioned above, the root cause is:
=> A client-side timeout: the request is considered too slow; the server
did not respond in time.
The reasons are legion:
- The cluster can be busy (hot partitions)
- You can query more and more data, which takes more and more time (large
partiti
emote client and you
>suspect network-related latency, I would start by looking at the query that
>generates the timeout and the schema of the table. Make sure that you are
>querying WITHIN a partition and not ACROSS partitions. There are plenty of
>other potential problems, but y
This is a mailing list for Apache Cassandra, which is not the same
as the DataStax Enterprise Cassandra you are using. We may still be able to
help here if you could provide more details, such as the queries, table
schema, system stats (cpu, ram, disk io, network, and so on), logs,
table
Hi All,
My application has been frequently getting timeout errors for 2 weeks now. I'm
using DataStax Cassandra 4.14.
Can someone help me here?
Thanks,
Shagun
disabled but actually not, when
they see cql statement, they realized.
2) app side code query immediately after write
from the trace, you have read time, get this row write time by
select writetime ("any non-key column here") from "table_name_here"
where ...;
if read time i
their code and find the problematic
parts. In my case, a function was thought to be disabled but actually was not;
when they saw the CQL statement, they realized.
2) app side code query immediately after write
from the trace, you have read time, get this row write time by
select writetime ("any non-key column here")
On Sun, Aug 7, 2022 at 20:26, Raphael Mazelier wrote:
>
>> > "Read repair is in the blocking read path for the query, yep"
>>
>> OK interesting. This is not what I understood from the documentation. And
>> I use localOne level consistency.
>>
>>
the name of this functionality (in the new Cassandra release).
Hope it helps.
Kind regards
Stéphane
On Sun, Aug 7, 2022 at 20:26, Raphael Mazelier wrote:
> "Read repair is in the blocking read path for the query, yep"
OK interesting. This is not what I understood from the
Thanks a lot Scott, I didn't know this fact.
Kind regards
Stéphane
On Sun, Aug 7, 2022 at 19:31, C. Scott Andreas wrote:
> > but still as I understand the documentation the read repair should not
> be in the blocking path of a query ?
>
> Read repair is in the blocking read pat
Read repair is in the blocking read path for the query, yep"
>
> OK interesting. This is not what I understood from the documentation. And
> I use localOne level consistency.
>
> I enabled tracing (see the attachment of my first msg), but I didn't
> see read repair in the trace (and
> "Read repair is in the blocking read path for the query, yep"
OK interesting. This is not what I understood from the documentation.
And I use localOne level consistency.
I enabled tracing (see the attachment of my first msg), but I didn't
see read repair in the trace (an
> but still as I understand the documentation the read repair should not
> be in the blocking path of a query ?

Read repair is in the blocking read path for the query, yep. At quorum
consistency levels, the read repair must complete before returning a result
to the client to ensure the data retur
ld change the consistency level to LOCAL_ONE/LOCAL_QUORUM/etc. to
>> fix the problem.
>>
>> On 05/08/2022 22:54, Bowen Song wrote:
>>
>> The DCAwareRoundRobinPolicy/TokenAwareHostPolicy controls which
>> Cassandra coordinator node the client sends queries to, not the nodes it
to, not the
nodes it connects to, nor the nodes that perform the actual read.
A client sends a CQL read query to a coordinator node, and the
coordinator node parses the CQL query, and send READ requests to
other nodes in the cluster based on the consistency level.
Have you checked
to gocql (I got pretty much the same result in Python).
I wonder if it's related to the read_repair_chance and
dclocal_read_repair_chance parameters.
But still, as I understand the documentation, the read repair should not
be in the blocking path of a query?
--
Raphael Mazelier
On 05/08/2022 23:13, Jim
to read on other DC.
Btw it's not limited to gocql (I got pretty much the same result in Python).
I wonder if it's related to the read_repair_chance and
dclocal_read_repair_chance parameters.
But still, as I understand the documentation, the read repair should not
be in the blocking path of a query
The DCAwareRoundRobinPolicy/TokenAwareHostPolicy controls which
Cassandra coordinator node the client sends queries to, not the nodes it
connects to, nor the nodes that perform the actual read.
A client sends a CQL read query to a coordinator node, and the
coordinator node parses the CQL
gating some performance issue I noticed strange things in my
> experiment:
>
> What we expect is very low latency, 3-5 ms max, for this specific select
> query. So we want every read to be local to each datacenter.
>
> We configure DCAwareRoundRobinPolicy
-1': '2', 'us-east-1': '2'}
Investigating some performance issue I noticed strange things in my
experiment:
What we expect is very low latency, 3-5 ms max, for this specific select
query. So we want every read to be local to each datacenter.
We configure DCAwareRoundRobinPolicy(local_dc=DC
while the compaction is running.
The most important factor in read performance is the amount of data each
node has to scan in order to complete the read query. Large partitions,
too many tombstones, a partition spread over too many SSTables, etc. all
hurt performance. You will need to find
> *From:* Bowen Song
From: Bowen Song
Sent: Friday, July 1, 2022 08:48
To: user@cassandra.apache.org
Subject: Re: Query around Data Modelling -2
This message was sent from outside the company. Please do not click links or
open attachments unless you recognise the source of t
e
auto-compaction on the table and is relying on weekly scheduled
compactions? Or running weekly major compactions? Neither of these
sounds right.
On 30/06/2022 15:03, MyWorld wrote:
Hi all,
Another query around data Modelling.
We have an existing table with the below
ng on weekly scheduled
> compactions? Or running weekly major compactions? Neither of these sounds
> right.
> On 30/06/2022 15:03, MyWorld wrote:
>
> Hi all,
>
> Another query around data Modelling.
>
> We have a existing table with below structure:
> Table(PK,CK, col1,col2,
06/2022 15:03, MyWorld wrote:
Hi all,
Another query around data Modelling.
We have an existing table with the below structure:
Table(PK, CK, col1, col2, col3, col4, col5)
Now each PK here has 1k-10k clustering keys. Each PK has a size from
10 MB to 80 MB. We have over 100 million partitions overall. Also we
> How are you running repair? -pr? Or -st/-et?
>
> 4.0 gives you real incremental repair which helps. Splitting the table
> won’t make reads faster. It will increase the potential parallelization of
> compaction.
>
> On Jun 30, 2022, at 7:04 AM, MyWorld wrote:
>
>
> Hi
How are you running repair? -pr? Or -st/-et?
4.0 gives you real incremental repair which helps. Splitting the table won’t
make reads faster. It will increase the potential parallelization of
compaction.
> On Jun 30, 2022, at 7:04 AM, MyWorld wrote:
>
>
> Hi all,
>
> An
Hi all,
Another query around data modelling.
We have an existing table with the below structure:
Table(PK, CK, col1, col2, col3, col4, col5)
Now each PK here has 1k-10k clustering keys. Each PK has a size from 10 MB
to 80 MB. We have over 100 million partitions overall. Also we have set
levelled
e is still
> under 100 MB
>
> On Thu, Jun 23, 2022, 7:18 AM Jeff Jirsa wrote:
>
>> How many rows per partition in each model?
>>
>>
>> > On Jun 22, 2022, at 6:38 PM, MyWorld wrote:
>> >
>> >
>> > Hi all,
>> >
&g
7:18 AM Jeff Jirsa wrote:
>>> How many rows per partition in each model?
>>>
>>>
>>> > On Jun 22, 2022, at 6:38 PM, MyWorld wrote:
>>> >
>>> >
>>> > Hi all,
>>> >
>>> > Just a small query aroun
022, at 6:38 PM, MyWorld wrote:
>> >
>> >
>> > Hi all,
>> >
>> > Just a small query around data Modelling.
>> > Suppose we have to design the data model for 2 different use cases which
>> > will query the data on same set of (partion
ach model?
>
>
> > On Jun 22, 2022, at 6:38 PM, MyWorld wrote:
> >
> >
> > Hi all,
> >
> > Just a small query around data Modelling.
> > Suppose we have to design the data model for 2 different use cases which
> will query the data on same se
From: MyWorld
Sent: Thursday, June 23, 2022 09:38
To: user@cassandra.apache.org
Subject: Query around Data Modelling
Table1 should be fine. If some column values are not entered, then Cassandra
will not create entries for them, so the partitions will be almost the same in
both cases.
On Thu, Jun 23, 2022, 07:08 MyWorld wrote:
> Hi all,
>
> Just a small query around data Modelling.
> Suppose we have to design th
How many rows per partition in each model?
> On Jun 22, 2022, at 6:38 PM, MyWorld wrote:
>
>
> Hi all,
>
> Just a small query around data Modelling.
> Suppose we have to design the data model for 2 different use cases which will
> query the data on same set of (par
Hi all,
Just a small query around data modelling.
Suppose we have to design the data model for 2 different use cases which
will query the data on the same set of (partition + clustering key). Should we
maintain a separate table for each, or a single table?
Model1 - Combined table
Table(Pk,CK, col1
familiar error:
com.datastax.oss.driver.api.core.servererrors.ReadTimeoutException:
Cassandra timeout during read query at consistency LOCAL_ONE (1
responses were required but only 0 replica responded)
In the past this has been due to clocks being out of sync (not the
issue here), or a table that has
Cassandra timeout during read query at consistency LOCAL_ONE (1
responses were required but only 0 replica responded)
In the past this has been due to clocks being out of sync (not the issue
here), or a table that has been written to with LOCAL_ONE instead of
LOCAL_QUORUM. I don't believe
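The LOCAL_ONE-write/stale-read interaction mentioned here follows the usual
overlap rule: reads are guaranteed to see the latest write only when read
replicas + write replicas exceed the replication factor. A quick check of that
standard consistency arithmetic:

```python
def overlapping(read_replicas, write_replicas, rf):
    """True when every read quorum intersects every write quorum."""
    return read_replicas + write_replicas > rf

print(overlapping(2, 2, 3))  # QUORUM writes + QUORUM reads at RF=3 -> True
print(overlapping(1, 1, 3))  # LOCAL_ONE writes + LOCAL_ONE reads  -> False
```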
Lost task 306.3 in stage 0.0 (TID
1180) (172.16.100.39 executor 0):
com.datastax.oss.driver.api.core.DriverTimeoutException: Query
timed out after PT16M
at
com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.lambda$scheduleTimeout$1(CqlRequestHandle
lure: Lost task 306.3 in stage 0.0 (TID
1180) (172.16.100.39 executor 0):
com.datastax.oss.driver.api.core.DriverTimeoutException: Query
timed out after PT16M
at
com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.lambda$scheduleTimeout$1(CqlRequestHandle
tException: Query
timed out after PT16M
at
com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.lambda$scheduleTimeout$1(CqlRequestHandler.java:206)
at
com.datastax.oss.driver.shaded.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTime
e 0.0 (TID 1180) (172.16.100.39 executor
> 0): com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed
> out after PT16M
> at
> com.datastax.oss.driver.internal.core.cql.CqlRequestHandler.lambda$scheduleTimeout$1(CqlRequestHandler.java:206)
> at
> com.datas
ge failure: Task 306 in stage 0.0 failed 4 times, most recent
failure: Lost task 306.3 in stage 0.0 (TID 1180) (172.16.100.39 executor
0): com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed
out after PT16M
at
com.datastax.oss.driver.internal.core.cql.CqlRequestHand
of the client. I don't see
a string matching this in the Cassandra codebase itself, but I do see
that this is parseable as a Duration.
```
jshell> java.time.Duration.parse("PT2M").getSeconds()
$7 ==> 120
```
The server-side log you see is likely an indicator of the timeout
from the server
tion of the client. I don't see a
string matching this in the Cassandra codebase itself, but I do see
that this is parseable as a Duration.
```
jshell> java.time.Duration.parse("PT2M").getSeconds()
$7 ==> 120
```
The server-side log you see is likely an indicator of the timeo
The server-side log you see is likely an indicator of the timeout from the
server's perspective. You might consider checking logs from the replicas for
dropped reads, query aborts due to scanning more tombstones than the
configured max, or other cond
Hi all - using Cassandra 4.0.1 and a Spark job running against a large
table (~8 billion rows), I'm getting this error on the client side:
Query timed out after PT2M
On the server side I see a lot of messages like:
DEBUG [Native-Transport-Requests-39] 2022-02-03 14:39:56,647
>> DELETE FROM game.tournament USING TIMESTAMP 161692578000 WHERE
>> tournament_id = 1 AND version_id = 1 AND partition_id = 1;
>>
>>
>> Cassandra internally manages the timestamp of each column when some data
>> is updated on the same column.
>>
>>
> Cassandra internally manages the timestamp of each column when some data
> is updated on the same column.
>
>
> My Query is , *USING TIMESTAMP 161692578000* picks up a timestamp of
> which column ?
>
>
>
> CREATE TABLE game.tournament (
>
> tournament
on the same column.
My query is: *USING TIMESTAMP 161692578000* picks up the timestamp of
which column?
CREATE TABLE game.tournament (
tournament_id bigint,
version_id bigint,
partition_id bigint,
user_id bigint,
created_at timestamp,
rank bigint,
score bigint
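To the question itself: USING TIMESTAMP does not pick up any column's
timestamp; it sets the write timestamp applied to every column written by that
statement, and Cassandra then reconciles each column by highest timestamp
(last-write-wins). A toy model of that per-column rule (illustrative only, not
Cassandra internals):

```python
cells = {}  # column -> (value, write_timestamp)

def write(col, value, ts):
    # Last-write-wins per column: keep the cell with the highest timestamp.
    if col not in cells or ts >= cells[col][1]:
        cells[col] = (value, ts)

write("rank", 5, ts=161692578000)
write("rank", 9, ts=100)   # older timestamp loses, even though it arrives later
print(cells["rank"][0])    # -> 5
```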
doccount=doccount+? where id=?
Which runs OK.
Immediately following the update, I do:
select doccount from doc.seq where id=?
It is the above statement that is throwing the error under heavy load.
The select also frequently fails with a "No node was available to
execute the query". I w
Interestingly, I just tried creating two CqlSession objects and when I
use both instead of a single CqlSession for all queries, the 'No Node
available to execute query' no longer happens. In other words, if I
use a different CqlSession for updating the doc.seq table, it works
The select also frequently fails with a "No node was available to
execute the query". I wait 50 msec and retry, and that typically
works. Sometimes it will retry as many as 15 times before getting a
response, but this PT1M error is new.
Running: nodetool cfstats doc.seq results in:
To
The error message is clear, it was a DriverTimeoutException, and it was
because the query timed out after one minute.
Note: "PT1M" means a period of one minute, see
https://en.wikipedia.org/wiki/ISO_8601#Durations
If yo
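The PT-prefixed string is an ISO-8601 duration, the same notation as the PT2M
and PT16M errors elsewhere in this archive. A minimal parser for the PTnHnMnS
subset, mirroring the jshell check quoted earlier (illustrative; real code
should use `java.time.Duration` or an ISO-8601 library):

```python
import re
from datetime import timedelta

def parse_pt_duration(s):
    """Parse the PT[nH][nM][nS] subset of ISO-8601 durations."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", s)
    if not m:
        raise ValueError(f"not a PT duration: {s!r}")
    h, mi, sec = (int(g) if g else 0 for g in m.groups())
    return timedelta(hours=h, minutes=mi, seconds=sec)

print(parse_pt_duration("PT1M").total_seconds())   # -> 60.0
print(parse_pt_duration("PT16M").total_seconds())  # -> 960.0
```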
I'm getting this error:
com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed out
after PT1M
but I can't find any documentation on this message. Anyone know what
this means? I'm updating a counter value and then doing a select from
the table. The table that I'm selecting
> *From:* Bowen Song
> *Sent:* Monday, March 15, 2021 5:27 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: No node was available to execute query error
>
>
>
> There are different approaches, depending on the application's logic.
> Roughly speaking, t
@cassandra.apache.org
*Subject:* [EXTERNAL] Re: No node was available to execute query error
There are different approaches, depending on the application's logic.
Roughly speaking, there are two distinct scenarios:
1. Your application knows all the partition keys of the required data
query error
There are different approaches, depending on the application's logic. Roughly
speaking, there are two distinct scenarios:
1. Your application knows all the partition keys of the required data in
advance, either by reading them from another data source (e.g.: another
Cassandra table
n
records, but is that the best way?
-joe
On 3/15/2021 1:42 PM, Bowen Song wrote:
I personally try to avoid using secondary indexes, especially in
large clusters.
SI is not scalable, because a SI query doesn't have the partition key
information, Cassandra must send it to nearly all nodes in a
in large
clusters.
SI is not scalable: because a SI query doesn't have the partition key
information, Cassandra must send it to nearly all nodes in a DC to get
the answer. Thus, the more nodes you have in a cluster, the slower and
more expensive it is to run a SI query. Creating a SI on a table also
I personally try to avoid using secondary indexes, especially in large
clusters.
SI is not scalable: because a SI query doesn't have the partition key
information, Cassandra must send it to nearly all nodes in a DC to get
the answer. Thus, the more nodes you have in a cluster, the slower
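The scaling argument can be put in numbers: a partition-keyed read touches
only the replicas, while a secondary-index query without the partition key
fans out to roughly every node in the DC. Illustrative arithmetic only:

```python
def nodes_contacted(dc_size, rf, has_partition_key):
    # Keyed read: only the rf replicas. SI query without the key: ~whole DC.
    return rf if has_partition_key else dc_size

print(nodes_contacted(100, 3, True))   # -> 3   (keyed read, RF=3)
print(nodes_contacted(100, 3, False))  # -> 100 (SI scatter-gather)
```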
e vs executeAsync. When using executeAsync:
com.datastax.oss.driver.api.core.NoNodeAvailableException: No
node was available to execute the query
       at
com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailableException.java:40)
       at
com.datastax.oss.driver.internal.core.util.conc
gnificantly* slower when running
execute vs executeAsync. When using executeAsync:
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node
was available to execute the query
at
com.datastax.oss.driver.api.core.NoNodeAvailableExc
ged that to just
execute (instead of async) and now I get no errors, no retries
anywhere. The insert is *significantly* slower when running
execute vs executeAsync. When using executeAsync:
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node
was available
uteAsync. When using executeAsync:
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node
was available to execute the query
at
com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailableException.java:40)
at
com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterru
y* slower when running execute
vs executeAsync. When using executeAsync:
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node
was available to execute the query
       at
com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailab
The highlight is "millions of rows in a **single** query". Fetching that
amount of data in a single query is bad because of the Java heap memory
overhead. You can fetch millions of rows from Cassandra; just make sure
you do that over thousands or millions of queries, not one single query.
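In practice "thousands of queries" usually means driver paging: the client
asks for one page of results at a time instead of materializing the whole
result set. The drivers do this for you via fetch size / page size settings;
the generator below only sketches the shape of the idea:

```python
def paged(rows, page_size):
    """Yield rows in fixed-size pages, like a driver fetching page by page."""
    page = []
    for row in rows:
        page.append(row)
        if len(page) == page_size:
            yield page
            page = []
    if page:
        yield page  # final partial page

pages = list(paged(range(10), 4))
print([len(p) for p in pages])  # -> [4, 4, 2]
```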
significantly* slower when running
execute vs executeAsync. When using executeAsync:
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was
available to execute the query
at
com.datastax.oss.driver.api.core.NoNodeAvailableException.cop
One question on the 'millions of rows in a single query': how would you
process that many rows? At some point, I'd like to be able to process
10-100 billion rows. Isn't that something that can be done with
Cassandra? I'm coming from HBase, where we'd run map-reduce jobs.
Thank you.
-Joe
) and now I get no errors, no retries
anywhere. The insert is *significantly* slower when running execute vs
executeAsync. When using executeAsync:
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was
available to execute the query
Millions of rows in a single query? That sounds like a bad idea to me. Your
"NoNodeAvailableException" could be caused by stop-the-world GC pauses,
and the GC pauses are likely caused by the query itself.
On 12/03/2021 13:39, Joe Obernberger wrote:
Thank you Paul and Erick. Th
The full stack trace:
Error: com.datastax.oss.driver.api.core.NoNodeAvailableException: No
node was available to execute the query
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was
available to execute the query
Hi Joe
This could also be caused by the replication factor of the keyspace: if you
have NetworkTopologyStrategy and it doesn't list a replication factor for the
datacenter datacenter1, then you will get this error message too.
Paul
> On 12 Mar 2021, at 13:07, Erick Ramirez wrote:
>
> Does
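Paul's point can be checked mechanically: with NetworkTopologyStrategy, a DC
absent from the replication options has an effective replication factor of 0
there, so no replica exists to serve the query. Sketch (the keyspace options
below are hypothetical):

```python
def replicas_in_dc(replication_options, dc):
    """Effective RF of a NetworkTopologyStrategy keyspace in a given DC."""
    return int(replication_options.get(dc, 0))

opts = {"us-east-1": "3", "us-west-2": "3"}   # hypothetical keyspace options
print(replicas_in_dc(opts, "datacenter1"))    # -> 0: no node can serve the read
```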
Does it get returned by the driver every single time? The
NoNodeAvailableException gets thrown when (1) all nodes are down, or (2)
all the contact points are invalid from the driver's perspective.
Is it possible there's no route/connectivity from your app server(s) to the
172.16.x.x network? If
Hi All - I'm getting this error:
Error: com.datastax.oss.driver.api.core.NoNodeAvailableException: No
node was available to execute the query
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was
available to execute the query
Hey Deepak,
"Are you suggesting to reduce the fetchSize (right now fetchSize is
5000) for this query?"
Definitely yes! Going with only 1000 would give the Cassandra node or nodes
executing your query a 5x better chance to finish in time, pullin
Hi Attila,
We did have larger partitions, which are now below the 100 MB threshold after
we ran nodetool repair. Now we do see that, most of the time, query runs
succeed, but there is a small percentage of query runs which
are still failing.
Regarding your comment ```considered
e
> those details
>
> That 5 secs timeout comes from the coordinator node I think - see
> cassandra.yaml "read_request_timeout_in_ms" setting - that is influencing
> this
>
> But it does not matter too much... The point is that none of the replicas
> co
The point is that none of the
replicas could complete your query within those 5 secs. And this is a
clear indication that something is slow with your query.
Maybe 4) is a bit less important here, or I would make it a bit more
precise: considered together with your fetchSize (a driver setting on the
q
Deepak,
Can you reply with:
1) The query you are trying to run.
2) The table definition (PRIMARY KEY, specifically).
3) Maybe a little description of what the table is designed to do.
4) How much data you're expecting returned (both # of rows and data size).
Thanks,
Aaron
On Mon, Sep 14
Hi There,
We are running into a strange issue in our Cassandra cluster where one
specific query is failing with the following error:
Cassandra timeout during read query at consistency QUORUM (3 responses were
required but only 0 replica responded)
This is not a typical query read timeout that we
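The "3 responses were required" figure comes from the quorum arithmetic: a
plain QUORUM read needs a majority of all replicas across datacenters,
floor(total_rf / 2) + 1. A quick check of which total replication factors are
consistent with needing 3 responses:

```python
def quorum(total_rf):
    """Responses a QUORUM read needs across all replicas."""
    return total_rf // 2 + 1

# "3 responses were required" is consistent with a total RF of 4 or 5:
print([rf for rf in range(1, 7) if quorum(rf) == 3])  # -> [4, 5]
```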