I guess I misspoke, sorry. It is true that count(*), like any other query, is
still governed by the read timeout, and any count that has to process a lot
of data will take a long time and will require a high timeout to avoid timing
out (as is true of every aggregation query).
I guess I responded
+1. I also encountered timeouts many, many times (using DataStax DevCenter).
Roughly, this occurred when count(*) > 1,000,000.
2017-02-20 14:42 GMT+01:00 Edward Capriolo :
Seems worth it to file a bug since some here are under the impression it
almost always works and others are under the impression it almost never
works.
On Friday, February 17, 2017, kurt greaves wrote:
Really? Well, that's good to know. It still almost never works though. I
guess every time I've seen it, it must have timed out due to tombstones.
On 17 Feb. 2017 22:06, "Sylvain Lebresne" wrote:
On Fri, Feb 17, 2017 at 11:54 AM, kurt greaves wrote:
+1 for using Spark for counts.
On Feb 17, 2017 4:25 PM, "kurt greaves" wrote:
Hi,
We faced this issue too.
You could try a reduced page size, so that the tombstone threshold isn't
breached.
Try using "PAGING 500" in cqlsh
[ https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlshPaging.html ]
Similarly, the page size can be set in the Java driver as well.
This is a workaround.
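For the archives, the paging idea sketched above can be shown without a live
cluster. Below is a minimal pure-Python sketch of counting via small pages;
`fetch_page` is a hypothetical stand-in for one driver round-trip (the real
knobs are cqlsh's `PAGING`, `fetch_size` in the Python driver, or
`setFetchSize` in the Java driver):

```python
# Sketch: why a smaller page size helps. Each page is one request, so the
# per-request work (live rows scanned plus tombstones touched) stays under
# the server's timeout and tombstone thresholds.
# `fetch_page(page_size, paging_state)` is a hypothetical stand-in for one
# driver round-trip; it returns (rows, next_paging_state_or_None).
def count_with_paging(fetch_page, page_size=500):
    total = 0
    paging_state = None
    while True:
        rows, paging_state = fetch_page(page_size, paging_state)
        total += len(rows)
        if paging_state is None:  # no more pages
            break
    return total
```

The count is accumulated client-side, so no single request ever has to scan
the whole table.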
On Fri, Feb 17, 2017 at 11:54 AM, kurt greaves wrote:
That's just not true. count(*)
If you want a reliable count, you should use Spark. Performing a count(*)
will inevitably fail unless you make your server read timeouts and
tombstone failure thresholds ridiculously high.
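A reliable count can be assembled without blowing out timeouts by splitting
the token ring into many small ranges and summing per-range counts, which is
roughly what the Spark Cassandra connector does under the hood. A minimal
pure-Python sketch (no driver required; `execute_count` is a hypothetical
stand-in for issuing `SELECT count(*) FROM ks.t WHERE token(id) > ? AND
token(id) <= ?` against one sub-range):

```python
# Sketch: split the full Murmur3 token ring into n_splits (start, end]
# sub-ranges and sum many small per-range counts instead of running one
# huge, timeout-prone count(*).
MIN_TOKEN = -(2 ** 63)      # Murmur3Partitioner token range
MAX_TOKEN = 2 ** 63 - 1

def token_ranges(n_splits):
    """Yield (start, end] sub-ranges covering the full token ring."""
    step = (MAX_TOKEN - MIN_TOKEN) // n_splits
    start = MIN_TOKEN
    for i in range(n_splits):
        end = MAX_TOKEN if i == n_splits - 1 else start + step
        yield (start, end)
        start = end

def total_count(execute_count, n_splits=16):
    """Sum per-range counts; execute_count(lo, hi) is a hypothetical
    stand-in for one small range-restricted count(*) query."""
    return sum(execute_count(lo, hi) for lo, hi in token_ranges(n_splits))
```

Each sub-query touches only a slice of the data, so it finishes within the
normal read timeout; Spark simply parallelizes this across executors.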
On 17 Feb. 2017 04:34, "Jan" wrote:
Hi,
could you post the output of nodetool cfstats for the table?
Cheers,
Jan
On 16.02.2017 at 17:00, Selvam Raman wrote:
I am not getting the count as a result. Instead, I keep on getting a number
of results like the one below.
Read 100 live rows and 1423 tombstone cells for query SELECT * FROM
keysace.table WHERE token(id) > token(test:ODP0144-0883E-022R-002/047-052)
LIMIT 100 (see tombstone_warn_threshold)
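The warning names `tombstone_warn_threshold`; that setting, its failure
counterpart, and the read timeouts discussed in this thread all live in
`cassandra.yaml`. For reference, a fragment with the stock 3.x defaults
(values shown are the defaults, not recommendations):

```yaml
# cassandra.yaml (Cassandra 3.x defaults, shown for reference)
tombstone_warn_threshold: 1000       # log a warning past this many tombstones per read
tombstone_failure_threshold: 100000  # abort the read past this many tombstones
read_request_timeout_in_ms: 5000     # single-partition reads
range_request_timeout_in_ms: 10000   # range scans, which count(*) relies on
```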
On Thu, Feb 16, 2017 at
With C* 3.10
cqlsh ip --request-timeout=60
Connected to x at 10.10.10.10:9042.
[cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> USE ;
cqlsh:> SELECT count(*) from table;
 count
---------
 3572579
On 02/16/2017 12:27 PM, Selvam
I am using Cassandra 3.9.
Primary Key:
id text;
On Thu, Feb 16, 2017 at 12:25 PM, Cogumelos Maravilha <
cogumelosmaravi...@sapo.pt> wrote:
> C* version please and partition key.
Hi,
I want to know the total record count in a table.
I fired the below query:
select count(*) from tablename;
and I got the below output:
Read 100 live rows and 1423 tombstone cells for query SELECT * FROM
keysace.table WHERE token(id) > token(test:ODP0144-0883E-022R-002/047-052)
I would like to use the SELECT count query.
Although it worked in Cassandra 1.2.9, there is a situation in which it does
not work in Cassandra 2.0.0.
If some rows are deleted, the SELECT count query seems to return the wrong
value.
Did anything change in Cassandra 2.0.0, or have I made a mistake?