It’s going to cause a lot of compactions - this is especially true with STCS,
where many of your SSTables (especially the big ones) will overlap and be joined.
Monitor free space (and stop compactions as needed), free memory (the bloom
filters built during compaction will take a big chunk), and
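A few commands cover most of that monitoring while the big compactions run; a rough sketch, assuming the default data directory path:

```shell
df -h /var/lib/cassandra      # watch free disk space
nodetool compactionstats      # pending and active compactions
nodetool stop COMPACTION      # abort running compactions if disk gets tight
```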
Hi All,
My production cluster is running 2.2.8. It is used to store time series data
with inserts (with TTL) only - no updates or deletions. From the mailing lists
it seems TWCS is more suitable than STCS for my use case, so I'm thinking about
changing STCS to TWCS in production. I have read the
guide(htt
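For reference, the switch itself is a single schema change; a hedged sketch with a hypothetical table ks.ts_data and daily windows (size the window to your TTL). Note that TWCS is only bundled from 3.0.8/3.8 onward - on a 2.2.x cluster you would need the separately built TWCS jar, with the class name matching that build:

```sql
ALTER TABLE ks.ts_data
WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': 1
};
```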
Hello everybody,
I am using the DataStax Java driver (3.3.0).
When querying large amounts of data, we set the fetch size (1) and transmit
the data to the browser page by page.
I am wondering if I can get the page id (paging state) without receiving the
real rows from Cassandra to my server.
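For what it's worth, in the 3.x Java driver the paging state is only handed back after a page has actually been fetched; a sketch of the usual pattern, with a hypothetical contact point and table:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PagingState;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagingSketch {
    public static void main(String[] args) {
        // Hypothetical contact point and table; adjust for your cluster.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        Statement stmt = new SimpleStatement("SELECT * FROM ks.events");
        stmt.setFetchSize(100);                      // rows per page

        ResultSet rs = session.execute(stmt);        // fetches page 1
        int remaining = rs.getAvailableWithoutFetching();
        for (Row row : rs) {
            // ... hand the row to the browser response ...
            if (--remaining == 0) break;             // stop at the page boundary
        }

        // Opaque state for the NEXT page; only available after this page
        // was fetched - it cannot be computed up front.
        PagingState ps = rs.getExecutionInfo().getPagingState();
        String token = ps.toString();                // safe to round-trip as a string

        // Later request: resume from the saved state.
        Statement next = new SimpleStatement("SELECT * FROM ks.events");
        next.setFetchSize(100);
        next.setPagingState(PagingState.fromString(token));
        session.execute(next);                       // fetches page 2

        cluster.close();
    }
}
```

As far as I know, this also means you cannot jump straight to page N's state without walking pages 1..N-1, since the server produces the state per fetched page.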
DataStax Enterprise (paid license) has embedded SOLR search with Cassandra if
you don’t want to move the data to another cluster for indexing/searching.
Similar to Cassandra modeling, you will need to understand the exact search
queries in order to build the SOLR schema to support them.
The b
> On Dec 28, 2017, at 11:09 AM, Durity, Sean R wrote:
--> See inline
Hello All,
We are going to add 2 new nodes to our production cluster; there are 2 questions
we would like some advice on.
1. In the current production env, the Cassandra version is 3.0.4 - is it OK if
we use 3.0.15 for the new nodes?
--> I would not do this. Streaming between versions
Have you determined whether a specific query is the one timing out? It is
possible that the query/data model does not scale well, especially if you are
trying to do something like a full table scan.
It is also possible that your OS settings will limit the number of connections
to the host. Do
Decommission the two nodes, one at a time (assumes you have enough disk space
on the remaining hosts). That will move the data to the remaining nodes and
keep RF=3. Then fix the host. Then add the hosts back into the cluster, one at
a time. This is easier with vnodes. Finally, run clean-up on th
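The sequence above, sketched as the usual nodetool steps (run each on the indicated host, and wait for each step to finish before the next):

```shell
# On each of the two nodes, one at a time:
nodetool decommission         # streams its data to the remaining nodes

# ... fix the host, then restart Cassandra so the node bootstraps back in ...

# After both nodes have rejoined, on each of the other nodes:
nodetool cleanup              # drop data the node no longer owns
```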