There are three different things we are talking about here:
1. SimpleStrategy vs. NetworkTopologyStrategy matters when you have a single DC vs. multiple DCs.
2. In both cases you can specify the replication factor; obviously in the SimpleStrategy case you don't mention a DC, whereas in NetworkTopologyStrategy you can mention multiple DCs.
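For illustration, keyspace definitions for the two cases might look like this (the keyspace and data-center names here are hypothetical):

```cql
-- Single DC: SimpleStrategy takes only a cluster-wide replication factor.
CREATE KEYSPACE app_ks
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

-- Multiple DCs: NetworkTopologyStrategy takes a replication factor per DC.
CREATE KEYSPACE app_ks_multi
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 2};
```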
Alain, great write-up on the recovery procedure. You covered both the replication factor and consistency levels. As mentioned, the two anti-entropy mechanisms, hinted handoffs and read repair, work for temporary node outages and incremental recovery. In case of disaster/catastrophic recovery, nodetool repair
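For reference, the repair invocation alluded to above is roughly the following (keyspace name hypothetical; requires a live node):

```sh
# Full anti-entropy repair of one keyspace on this node
nodetool repair app_ks

# Repair only this node's primary token ranges; run on each node in turn
# to cover the whole cluster without redundant work
nodetool repair -pr app_ks
```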
...@datastax.com
To: user@cassandra.apache.org; Saladi Naidu naidusp2...@yahoo.com
Sent: Friday, July 10, 2015 5:25 PM
Subject: Re: DROP Table
#1 The cause of this problem is a CREATE TABLE statement collision. Do not
generate tables dynamically from multiple clients, even with IF NOT EXISTS.
First
We are running Apache Cassandra 2.1.9. One of our column families has a MAP-type column. We are seeing an unusually large data size for the column family (SSTables) with only a few thousand rows. While debugging, I looked at one of the SSTables and I see some unusual data in it.
Below is the JSON of one row key
Suppose I have a row of existing data with a set of values for its attributes; I call this State1. I issue an update to some columns at QUORUM consistency. Suppose the write succeeded on one node, Node1, and failed on the remaining nodes. As there is no rollback, Node1's row attributes will retain the new values
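The scenario can be sketched in cqlsh terms (table and column names hypothetical):

```cql
-- Assume RF = 3, so QUORUM needs 2 replica acknowledgements.
CONSISTENCY QUORUM;
UPDATE employee SET state = 'State2' WHERE id = 42;
-- If only Node1 applied the write, the coordinator reports a failure
-- (e.g. a WriteTimeout), but there is no rollback: Node1 keeps 'State2'.
-- A later QUORUM read that touches Node1 can trigger read repair and
-- propagate 'State2', so the "failed" write may still become visible.
```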
If you go with month as the partition key, then you need to duplicate the data. I don't think going with name as the partition key is good data-model practice, as it will create a hotspot. Also, I believe your queries will mostly be by employee, not by month.
You can create employee id as the partition key and
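A minimal sketch of that model, with hypothetical table and column names:

```cql
-- Partition by employee; month as a clustering key keeps all of one
-- employee's rows in a single partition and supports month-range queries.
CREATE TABLE timesheet (
  employee_id  text,
  month        int,   -- e.g. 201509
  hours_logged int,
  PRIMARY KEY (employee_id, month)
);

-- Queries go by employee, optionally narrowed by month:
-- SELECT * FROM timesheet WHERE employee_id = 'e123' AND month >= 201501;
```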
My understanding is that the Cassandra file structure follows the naming convention below:
/cassandra/data/<keyspace>/<table>
Whereas our file structure is as below: each table has multiple names, and when we drop tables and recreate them, these directories remain. Also, when we dropped the table, one
Is there a way to find out how data is distributed within a column family by each node? Nodetool shows how data is distributed across nodes, but that only shows all the data by node. We are seeing heavy load on one node and I suspect that partitioning is not distributing data equally. But to prove
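Assuming you can run nodetool on each node, per-node, per-table numbers are available from the standard tools (keyspace/table names hypothetical):

```sh
# Per-table stats on this node: SSTable count, space used, key estimates
nodetool cfstats app_ks.employee

# Partition-size and cell-count histograms for the table on this node;
# comparing these across nodes shows whether one node holds outsized partitions
nodetool cfhistograms app_ks employee
```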
We are using the Leveled Compaction Strategy (LCS) on a column family. Below are cfstats from two nodes in the same cluster: one node has 880 SSTables in L0, whereas the other node has just 1 SSTable in L0. On the node with multiple SSTables, all of them are small and have the same creation timestamp.
<n...@thelastpickle.com>
To: Cassandra Users <user@cassandra.apache.org>; Saladi Naidu
<naidusp2...@yahoo.com>
Sent: Tuesday, September 15, 2015 4:53 PM
Subject: Re: LTCS Strategy Resulting in multiple SSTables
That's an early 2.1/known buggy version. There have been several issues
We are on 2.1.2 and planning to upgrade to 2.1.9.
Naidu Saladi
From: Marcus Eriksson <krum...@gmail.com>
To: user@cassandra.apache.org; Saladi Naidu <naidusp2...@yahoo.com>
Sent: Tuesday, September 15, 2015 1:53 AM
Subject: Re: LTCS Strategy Resulting in multiple SSTable
I can think of the following features to solve this:
1. If you know the time period after which data should be removed, then use the TTL feature.
2. Use a time-series data model and an inverted index to query the data by time period.
Naidu Saladi
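The TTL option above can be sketched as follows (table and values hypothetical):

```cql
-- Per-write TTL in seconds; expired cells are dropped at compaction time.
INSERT INTO events (id, ts, payload)
  VALUES (1, '2015-11-24 06:49:00', 'x')
  USING TTL 2592000;  -- 30 days

-- Or set a table-wide default instead of per-insert TTLs:
ALTER TABLE events WITH default_time_to_live = 2592000;
```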
On Tuesday, November 24, 2015 6:49 AM, Jack
https://issues.apache.org/jira/browse/CASSANDRA-12701 about it if interested in
following
On Wed, Oct 5, 2016 at 4:23 PM, Saladi Naidu <naidusp2...@yahoo.com> wrote:
We are seeing the following warnings in system.log. As compaction_large_partition_warning_threshold_mb in the cassandra.yaml file is as d
It depends on the partition/primary key design. In order to execute all 3 queries, the partition key is org id and the others are clustering keys. If there are many orgs it will be OK, but if there is only one org then a single partition will hold all the data, and that is not good.
Naidu Saladi
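A sketch of the trade-off, with hypothetical table and column names:

```cql
-- org_id as the partition key; clustering keys serve the other queries.
CREATE TABLE org_events (
  org_id     text,
  event_time timestamp,
  event_id   timeuuid,
  details    text,
  PRIMARY KEY (org_id, event_time, event_id)
);

-- With many orgs, partitions spread across the cluster; with a single
-- org, every row lands in one partition (one replica set): the hotspot
-- described above.
```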
On Thursday,
We are seeing the following warnings in system.log. As compaction_large_partition_warning_threshold_mb in the cassandra.yaml file is at its default value of 100, we are seeing these warnings:
110:WARN [CompactionExecutor:91798] 2016-10-05 00:54:05,554
BigTableWriter.java:184 - Writing large partition
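If large partitions are expected, the threshold that triggers this warning can be raised in cassandra.yaml; note this only silences the warning, and the underlying partition size remains a data-model concern (the value below is hypothetical):

```yaml
# cassandra.yaml -- warn when a compacted partition exceeds this size (MB).
# Default is 100.
compaction_large_partition_warning_threshold_mb: 512
```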
We are receiving the following error:
9140- at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
[apache-cassandra-3.0.10.jar:3.0.10]
9141- at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
9142:WARN [SharedPool-Worker-1] 2018-09-06 14:29:46,071
Cassandra is an eventually consistent DB; how can I find out when a row is actually written in a multi-DC environment? Here is the problem I am trying to solve:
- I have a multi-DC (3 DCs) Cassandra cluster/ring.
- One of the applications wrote a row to DC1 (using LOCAL_QUORUM) and, within a span of 50 ms, it
Have you tried tracing?
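A minimal cqlsh tracing session might look like this (keyspace/table hypothetical):

```cql
TRACING ON;
SELECT * FROM app_ks.employee WHERE id = 42;
-- The trace lists each replica's events with timestamps, including
-- remote-DC activity; past sessions are also kept for a while in
-- system_traces.sessions and system_traces.events.
```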
--
SIMON FONTANA OSCARSSON
Software Developer
Ericsson
Ölandsgatan 1
37133 Karlskrona, Sweden
simon.fontana.oscars...@ericsson.com
www.ericsson.com
On mån, 2018-07-09 at 19:30 +0000, Saladi Naidu wrote:
> Cassandra is an eventual consistent DB, how to find when a row is actually
> written in multi DC environment? Here is the problem I am trying to solve
>
> - I have m
see the query just before that WARN message appears in the log.
You can turn off the debugging once you get the info.
Good luck!
On Mon, Sep 17, 2018 at 9:06 PM Saladi Naidu
wrote:
Any clues on this topic?
Naidu Saladi

On Thursday, September 6, 2018 9:41 AM, Saladi Naidu wrote:
We are receiving following error
9140- at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
[apache-cassandra-3.0.10.jar:3.0.10]
9141- at java.lang.Thread.run