Hey Deepak,
"Are you suggesting to reduce the fetchSize (right now fetchSize is
5000) for this query?"
Definitely yes! If you went with 1000 instead, that would give the Cassandra
node(s) executing your query a 5x better chance of pulling the result
together in time.
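For example - just a minimal sketch, assuming you are on the DataStax Java
driver 3.x (the fetchSize terminology suggests so); the contact point,
keyspace, table and query below are placeholders only:

```
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class SmallerPagesExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {
            // Placeholder query - substitute your own.
            Statement stmt = new SimpleStatement(
                    "SELECT * FROM my_table WHERE partition_key = ?", "some-key");
            // Ask for 1000 rows per page instead of 5000, so each page has a
            // better chance of finishing within the coordinator's read timeout.
            stmt.setFetchSize(1000);
            ResultSet rs = session.execute(stmt);
            for (Row row : rs) {
                // The driver fetches the next pages transparently while iterating.
                System.out.println(row);
            }
        }
    }
}
```

Smaller pages do not change the total amount of data, they just break the
work into more, cheaper round trips.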
Hi Attila,
We did have large partitions, but they are now below the 100 MB threshold
after we ran nodetool repair. Most query runs now succeed, but a small
percentage are still failing.
Regarding your comment ```considered with your
Thanks Attila and Aaron for the response. These are great insights. I will
check and get back to you in case I have any questions.
Best,
Deepak
On Tue, Sep 15, 2020 at 4:33 AM Attila Wind wrote:
> Hi Deepak,
>
> Aaron is right - in order to be able to help (better) you need to share
> those details
Hi Deepak,
Aaron is right - in order to be able to help (better) you need to share
those details.
That 5-second timeout comes from the coordinator node, I think - see the
cassandra.yaml "read_request_timeout_in_ms" setting, which influences this.
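(For reference, this is what the knob looks like in cassandra.yaml - 5000 ms
is the default, which lines up with the 5-second timeout you are seeing:)

```
# cassandra.yaml on the coordinator nodes
read_request_timeout_in_ms: 5000
```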
But it does not matter too much... The point is
Deepak,
Can you reply with:
1) The query you are trying to run.
2) The table definition (PRIMARY KEY, specifically - see the sketch after this list for the level of detail that helps).
3) Maybe a little description of what the table is designed to do.
4) How much data you're expecting returned (both # of rows and data size).
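Purely as an illustration of the level of detail that helps - this schema is
made up, not a guess at yours:

```
CREATE TABLE my_keyspace.sensor_readings (
    sensor_id   uuid,
    reading_ts  timestamp,
    value       double,
    PRIMARY KEY ((sensor_id), reading_ts)
) WITH CLUSTERING ORDER BY (reading_ts DESC);
```

Knowing the partition key and clustering columns tells us how much data a
single query can be forced to pull from one partition, which is usually where
these timeouts come from.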
Thanks,
Aaron
On Mon, Sep 14, 2020