Jim's basic model is similar to how we've solved this exact kind of problem
many times. From my own experience, I strongly recommend that you make a
`bucket` field in the partition key, and a `time` field in the clustering
key. Make both of these of data type `timestamp`. Then use application
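A minimal sketch of that kind of schema (the table and column names here
are illustrative, not from the original mail):

    CREATE TABLE events (
        bucket  timestamp,  -- coarse time bucket, e.g. truncated to the hour
        time    timestamp,  -- exact event time
        payload text,
        PRIMARY KEY (bucket, time)
    ) WITH CLUSTERING ORDER BY (time DESC);

The application computes the bucket before each write (say, the start of
the hour the event falls in), which keeps any single partition from
growing without bound.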
Hi,
The cluster name should be unique because, with a misconfiguration, you
might make nodes connect to either of the clusters, and then you will have
nodes in the wrong clusters.
Theoretically it can work with the same names as well, but to be on the
safe side, make the cluster names unique.
Hannu
On Wed, 5
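For reference, the cluster name each node joins under is a single setting
in cassandra.yaml, so keeping two clusters apart is just a matter of
giving each its own value (the names below are illustrative):

    # cassandra.yaml on every node of the first cluster
    cluster_name: 'Production Cluster'

    # cassandra.yaml on every node of the second cluster
    cluster_name: 'Staging Cluster'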
It doesn't look like the embedded driver; it should come from a zip file
labeled with version 3.7.0.post0-2481531 for Cassandra 3.10:
Using CQL driver:
Sorry, I should have posted this example in my previous email, rather than
an example based on the non-embedded driver.
I don't know who
Hi,
I read in the Cassandra architecture documentation that if a node dies and
there is some data in the memtable which hasn't been written to an SSTable,
commit log replay happens (assuming the commit log had been flushed to
disk) when the node restarts, and hence the data can be recovered.
Hi,
We have a single-node instance where Cassandra, MySQL, and the
application all run on the same node for developers.
We are on DSE 4.8.9 and DSE is going down after some time.
What we have noticed is that a few of the jars at /usr/share/dse/common are
turning into 0 bytes.
Jars are as follows:
This would be normal if the switches are user-to-kernel mode transitions
(disk and network I/O are kernel-mode activities). If your run queue (jobs
waiting to run) is much larger than the number of cores (just a rough
guess, but it should stay under 2-3x the number of cores), you might have
other issues.
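One way to check both numbers at once, assuming a Linux box, is vmstat:

    # sample once per second, five times; the "r" column is the run
    # queue and "cs" is context switches per second
    vmstat 1 5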
...
Daemeon C.M.
Flushes have nothing to do with data persistence and node failure. Each
write is acknowledged only when the data has been written to the commit log
AND the memtable. That solves the issues of node failures and data
consistency. When the node boots back up it replays the commit log files
and you don't lose data.
You CAN have two separate clusters with the same name and configuration.
Separation of the clusters is just a matter of defining the seed nodes
properly. That being said, it doesn't mean you SHOULD have clusters with
the same name.
We usually run the same cluster name when testing on a test/stage cluster and
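Concretely, the seed list lives in cassandra.yaml; as long as neither
cluster lists the other's nodes as seeds, they won't gossip with each
other (the addresses below are made up):

    # cassandra.yaml, first cluster
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.1.1,10.0.1.2"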
That's an interesting refinement! I'll keep it in mind the next time this
sort of thing comes up.
Jim
On Wed, Apr 5, 2017 at 9:22 AM, Eric Stevens wrote:
> Jim's basic model is similar to how we've solved this exact kind of
> problem many times. From my own experience, I
Very good explanation.
One follow-up question. If CL is set to 1 and RF to 3, then there are
chances of the data being lost if the machine crashes before replication
happens and the commit log (on the node which processed the data for CL=1)
is not synced yet. Right?
Thanks,
Preetika
On Wed, Apr
Assuming we are using periodic mode for commit log sync.
On Wed, Apr 5, 2017 at 3:29 PM, preetika tyagi
wrote:
> Very good explanation.
> One follow-up question. If CL is set to 1 and RF to 3, then there are
> chances of the data being lost if the machine crashes before
Stefania
This is the output of my --debug run; I never touched CQLSH_NO_BUNDLED and
did not know about it.
As you can see, I used Homebrew to install Cassandra, and it looks like
it's the embedded version as it sits under the Cassandra folder?
cqlsh --debug
Using CQL driver:
Using connect
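If you want to rule the bundled driver out, cqlsh can be told to use a
separately installed python driver instead, e.g.:

    # skip the driver bundled under the cassandra folder and use
    # whichever cassandra-driver is installed in the environment
    CQLSH_NO_BUNDLED=true cqlsh --debug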
I beg to differ with @Matija here. IMO, by default Cassandra syncs data
into the commit log in a periodic fashion with an fsync period of 10 sec
(Ref - https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L361).
If a write is not yet written to disk and RF is 1 (or CL is LOCAL_ONE) and
the node goes
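The two sync modes in cassandra.yaml, for reference (the values shown are
the stock defaults; check your own file):

    # default: fsync the commit log every 10 seconds; acknowledged
    # writes inside that window can be lost on a crash
    commitlog_sync: periodic
    commitlog_sync_period_in_ms: 10000

    # alternative: don't acknowledge a write until the log is fsynced
    # commitlog_sync: batch
    # commitlog_sync_batch_window_in_ms: 2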
Someone on my team asked me a question that I could not find an easy answer
to, and I was hoping someone could answer it for me.
When we configure Cassandra, we use the Cluster Name, Data Center, and Rack
to define the group of Cassandra nodes involved in holding our keyspace
records.
If a second set
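For reference, with GossipingPropertyFileSnitch the data center and rack
are set per node in cassandra-rackdc.properties (the values below are
illustrative), while the cluster name lives in cassandra.yaml:

    # cassandra-rackdc.properties
    dc=DC1
    rack=RAC1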
I've noticed that my apache-cassandra 2.2.6 process is consistently
performing more than 10,000 CPU context switches per second.
Is this to be expected, or should I be looking into ways to lower the
number of context switches on my Cassandra cluster?
Thanks in advance.