Hi,
I have a 2-DC setup (DC1: 3 nodes, DC2: 3 nodes). All reads and writes are at
LOCAL_QUORUM. The question is: if I do reads at LOCAL_QUORUM in DC1, will
read repair happen on the replicas in DC2?
Thanks
-Raj
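[Editor's note: not from the thread, but for reference, LOCAL_QUORUM only counts replicas in the coordinator's own datacenter. A minimal sketch of the quorum arithmetic, assuming RF=3 per DC (pure Python, all names illustrative):

```python
# Quorum sizes for a 2-DC cluster with RF=3 per DC.
# LOCAL_QUORUM counts only replicas in the coordinator's own DC;
# whether the remote DC's replicas get read-repaired is governed
# separately by the read_repair_chance setting.

def quorum(replicas: int) -> int:
    """Majority of the given replica count."""
    return replicas // 2 + 1

rf_per_dc = 3
print(quorum(rf_per_dc))      # LOCAL_QUORUM within one DC -> 2
print(quorum(rf_per_dc * 2))  # full QUORUM across both DCs -> 4
```
]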
Hi experts,
Are there any benchmarks that quantify how long nodetool repair takes?
Something that says: on this kind of hardware, with this much data,
nodetool repair takes this long. The other question I have is: since
Cassandra recommends running nodetool repair within
I know it doesn't. But is this a valid enhancement request?
On Tue, Jul 5, 2011 at 1:32 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
On Tue, Jul 5, 2011 at 1:27 PM, Raj N raj.cassan...@gmail.com wrote:
Hi experts,
Are there any benchmarks that quantify how long nodetool repair
Do we need to do anything special to turn off-heap cache on?
https://issues.apache.org/jira/browse/CASSANDRA-1969
-Raj
, Raj N raj.cassan...@gmail.com wrote:
Do we need to do anything special to turn off-heap cache on?
https://issues.apache.org/jira/browse/CASSANDRA-1969
-Raj
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http
I had 3 nodes with strategy_options (DC1=3) in 1 DC. I added 1 more DC and 3
more nodes. I didn't set the initial token, but I ran nodetool move on the
new nodes (adding 1 to the tokens of the nodes in DC1). I updated the
keyspace to strategy_options (DC1=3, DC2=3). Then I started running nodetool
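[Editor's note: the "offset by 1" scheme from the Cassandra wiki can be sketched as follows, assuming RandomPartitioner's 0..2^127 token range (pure Python, illustrative only):

```python
# Give each DC the same evenly spaced tokens, shifted by a small
# offset so no two nodes in the cluster share a token.

RING = 2 ** 127  # RandomPartitioner token range

def evenly_spaced_tokens(num_nodes: int, offset: int = 0) -> list:
    """Evenly spaced tokens for one DC, shifted by `offset`."""
    return [(i * RING // num_nodes + offset) % RING for i in range(num_nodes)]

dc1 = evenly_spaced_tokens(3)            # starts at 0
dc2 = evenly_spaced_tokens(3, offset=1)  # each DC1 token + 1
print(dc1)
print(dc2)
```
]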
I have a 6-node Cassandra cluster (DC1=3, DC2=3) with 60 GB of data on each
node. I was bulk loading data over the weekend, but we forgot to turn off
the weekly nodetool repair job. As a result, repair was interfering while we
were bulk loading data. I canceled repair by restarting the nodes. But
...@gmail.com wrote:
You should run repair. If disk space is the problem, try running cleanup
and a major compaction before repair.
You can limit the streaming data by running repair for each column family
separately.
maki
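[Editor's note: the per-column-family suggestion above amounts to issuing one nodetool invocation per CF. A minimal sketch that just builds the command lines (keyspace and CF names are made up):

```python
# Build one "nodetool repair <keyspace> <cf>" command per column family,
# so each run streams less data at a time than a full-keyspace repair.

def per_cf_repair_commands(keyspace, column_families):
    return ["nodetool repair %s %s" % (keyspace, cf) for cf in column_families]

cmds = per_cf_repair_commands("myks", ["users", "events"])
for c in cmds:
    print(c)
```
]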
On 2012/04/28, at 23:47, Raj N raj.cassan...@gmail.com wrote:
I have a 6 node
Hi experts,
I have a 6 node cluster spread across 2 DCs.
DC    Rack    Status  State   Load       Owns     Token
                                                 113427455640312814857969558651062452225
DC1   RAC13   Up      Normal  95.98 GB   33.33%   0
DC2   RAC5    Up      Normal  50.79 GB
Can I infer from this that if I have 3 replicas, then running repair
without -pr on 1 node will repair the other 2 replicas as well?
-Raj
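[Editor's note: a toy sketch of why a full repair on one node also touches its neighbours. It assumes naive SimpleStrategy-style placement (each range's replicas are the primary node plus the next RF-1 nodes clockwise); node names are illustrative:

```python
# With RF=3, every range a node holds is shared with 2 other replicas,
# so repairing that range without -pr repairs those 2 replicas too.

def replicas_for_range(ring, primary_index, rf=3):
    """Nodes holding the range whose primary is ring[primary_index]."""
    return [ring[(primary_index + i) % len(ring)] for i in range(rf)]

ring = ["n1", "n2", "n3", "n4", "n5", "n6"]
print(replicas_for_range(ring, 0))  # ['n1', 'n2', 'n3']
```
]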
On Sat, Apr 14, 2012 at 2:54 AM, Zhu Han han...@nutstore.net wrote:
On Sat, Apr 14, 2012 at 1:57 PM, Igor i...@4friends.od.ua wrote:
Hi!
What is the
Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 20/05/2012, at 3:14 AM, Raj N wrote:
Hi experts,
I have a 6 node cluster spread across 2 DCs.
DC    Rack    Status  State   Load       Owns     Token
                                                 113427455640312814857969558651062452225
DC1
Hi experts,
I have a 6-node cluster across 2 DCs (DC1:3, DC2:3). I have assigned
tokens using the first strategy (adding 1) mentioned here -
http://wiki.apache.org/cassandra/Operations?#Token_selection
But when I run nodetool ring on my cluster, this is the result I get -
Address DC
AM, Raj N raj.cassan...@gmail.com wrote:
Hi experts,
I have a 6-node cluster across 2 DCs (DC1:3, DC2:3). I have assigned
tokens using the first strategy (adding 1) mentioned here -
http://wiki.apache.org/cassandra/Operations?#Token_selection
But when I run nodetool ring on my
Moving nodes around won't delete unneeded data after
the move is done.
Try running 'nodetool cleanup' on all of your nodes.
On Fri, Jun 15, 2012 at 12:24 PM, Raj N raj.cassan...@gmail.com wrote:
Actually I am not worried about the percentage. It's the data I am
concerned
about. Look at the first node
Nick, do you think I should still run cleanup on the first node?
-Rajesh
On Fri, Jun 15, 2012 at 3:47 PM, Raj N raj.cassan...@gmail.com wrote:
I did run nodetool move. But that was when I was setting up the cluster
which means I didn't have any data at that time.
-Raj
On Fri, Jun 15
DataStax recommends against running major compactions. Edward Capriolo's
Cassandra High Performance book suggests that major compaction is a good
thing and should be run on a regular basis. Are there any ground rules
about running major compactions? For example, if you have write-once kind
of data
compactions, but some use cases could see some benefits
On Tue, Jun 19, 2012 at 10:51 AM, Raj N raj.cassan...@gmail.com wrote:
DataStax recommends against running major compactions. Edward Capriolo's
Cassandra High Performance book suggests that major compaction is a good
thing and should be run
http://www.thelastpickle.com
On 17/06/2012, at 4:06 AM, Raj N wrote:
Nick, do you think I should still run cleanup on the first node?
-Rajesh
On Fri, Jun 15, 2012 at 3:47 PM, Raj N raj.cassan...@gmail.com wrote:
I did run nodetool move. But that was when I was setting up the cluster
which
How did you solve your problem eventually? I am experiencing something
similar. Did you run cleanup on the node that has 80GB data?
-Raj
On Mon, Aug 15, 2011 at 10:12 PM, aaron morton aa...@thelastpickle.com wrote:
Just checking: do you have read_repair_chance set to something? The second
Great stuff!!!
On Tue, Jun 26, 2012 at 5:25 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
Hello all,
It has not been very long since the first book was published, but
several things have been added to Cassandra and a few things have
changed. I am putting together a list of changed content,
Hi experts,
I am planning to upgrade from 0.8.4 to 1.+. What's the latest stable
version?
Thanks
-Rajesh
Hi experts,
We are planning to deploy Cassandra in 2 datacenters. Let's assume there
are 3 nodes, RF=3, 2 nodes in 1 DC and 1 node in the 2nd DC. Under normal
operations, we would read and write at QUORUM. What we want to do, though, is
if we lose the datacenter which has 2 nodes, DC1 in this case, we
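[Editor's note: the quorum arithmetic behind this concern, as a minimal sketch (pure Python, illustrative):

```python
# RF=3 split 2/1 across two DCs. QUORUM needs a majority of all
# replicas (2 of 3), so losing the 2-node DC leaves only 1 live
# replica and QUORUM reads/writes start failing.

def quorum_ok(rf, live_replicas):
    """True if enough replicas are up to satisfy QUORUM."""
    return live_replicas >= rf // 2 + 1

rf = 3
print(quorum_ok(rf, 3))  # all nodes up -> True
print(quorum_ok(rf, 1))  # 2-node DC lost -> False
```
]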
Is there a good formula to calculate heap utilization in Cassandra pre-1.1,
specifically 1.0.10? We are seeing GC pressure on our nodes, and I am
trying to estimate what could be causing it. Using nodetool info, my
steady-state heap is at about 10 GB. Xmx is 12 GB.
I have 4.5 GB of bloom filters
We are planning to upgrade soon. But in the meantime, I wanted to see if we
can tweak certain things.
-Rajesh
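[Editor's note: a back-of-envelope accounting of the numbers quoted above, assuming pre-1.1 behaviour where bloom filters and index samples live on-heap. The breakdown of the non-bloom remainder is an assumption, not from the thread:

```python
# Heap arithmetic for the figures in the message above.

GB = 1024 ** 3
xmx = 12 * GB            # -Xmx from the message
steady_state = 10 * GB   # per nodetool info, from the message
bloom_filters = int(4.5 * GB)  # from the message

# Remainder is memtables, caches, index samples, etc. (assumed).
other_on_heap = steady_state - bloom_filters
headroom = xmx - steady_state
print(other_on_heap / GB)  # 5.5
print(headroom / GB)       # 2.0
```
]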
On Wed, Nov 5, 2014 at 3:10 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Nov 4, 2014 at 8:51 PM, Raj N raj.cassan...@gmail.com wrote:
Is there a good formula to calculate heap
What's the latest on the maximum number of keyspaces and/or tables that one
can have in Cassandra 2.1.x?
-Raj
is reserved in heap still. Any plans to move it off-heap?
-Raj
On Tue, Nov 25, 2014 at 3:10 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Nov 25, 2014 at 9:07 AM, Raj N raj.cassan...@gmail.com wrote:
What's the latest on the maximum number of keyspaces and/or tables that
one can have
' and
event_time >= '2015-01-01 00:00:00' and event_time < '2015-01-02
00:00:00' and transaction_time >= ''
On Sat, Feb 14, 2015 at 3:06 AM, Raj N raj.cassan...@gmail.com wrote:
Has anyone designed a bi-temporal table in Cassandra? Doesn't look like I
can do this using CQL for now. Taking the time
Has anyone designed a bi-temporal table in Cassandra? Doesn't look like I
can do this using CQL for now. Taking the time series example from well-known
modeling tutorials in Cassandra -
CREATE TABLE temperatures (
weatherstation_id text,
event_time timestamp,
temperature text,
PRIMARY KEY
results and filter on the client.
hth,
dave
On 02/14/2015 06:05 PM, Raj N wrote:
I don't think that solves my problem. The question really is why we can't
use ranges for both time columns when they are part of the primary key.
They are in 1 row after all. Is this just a CQL limitation?
-Raj
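[Editor's note: the client-side workaround suggested above can be sketched like this. It assumes the server-side query range-restricted only event_time, and the second time dimension (transaction_time) is filtered in the application; rows are illustrative tuples, not driver result objects:

```python
# CQL (at the time) allowed a range restriction on only the last
# restricted clustering column, so filter the second dimension here.

from datetime import datetime

# Pretend these came back from the event_time range query:
rows = [
    # (event_time, transaction_time, temperature)
    (datetime(2015, 1, 1, 6), datetime(2015, 1, 1, 7), "21C"),
    (datetime(2015, 1, 1, 9), datetime(2015, 1, 3, 0), "22C"),
]

tx_cutoff = datetime(2015, 1, 2)
visible = [r for r in rows if r[1] <= tx_cutoff]  # client-side filter
print(len(visible))  # 1
```
]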