5 at 1:20 AM, Robert Coli <rc...@eventbrite.com> wrote:
On Tue, Nov 17, 2015 at 4:33 AM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Only if gc_grace_seconds hasn't passed since the failure. If your machine is
down for more than gc_grace_seconds you need to delete the data directory an
Hi Chandra,
I will comment on some points. Someone else can take remaining ones:
1. Secondary indexes are only useful when the data returned by the index query is in
the hundreds. Fetching large amounts of data using a secondary index would be very
slow. Secondary indexes don't scale well.
2. The token query should be
And
http://intellidzine.blogspot.in/2014/01/cassandra-data-modelling-primary-keys.html?m=1
Thanks
Anuj
Sent from Yahoo Mail on Android
From:"Anuj Wadehra" <anujw_2...@yahoo.co.in>
Date:Thu, 19 Nov, 2015 at 5:31 pm
Subject:Re: Range scans
Hi Chandra,
I will comment on some poin
n 3. As a snapshot:
- Load: 3.96, CPU wait: 30.8%, Disk Read Ops: 408/s
- Load: 5.88, CPU wait: 14.6%, Disk Read Ops: 275/s
- Load: 58.15, CPU wait: 87.0%, Disk Read Ops: 2,408/s
Can you recommend any next steps?
Griff
On 6 January 2016 at 17:31, Anuj Wadehra <anujw_2...@yahoo.co.i
Hi,
We are using C* 2.0.x. What options are available if the disk is too full to
do compaction on huge sstables formed by STCS (created long ago but not
getting compacted due to min_compaction_threshold being 4)?
We suspect that huge space will be released when the 2 largest sstables get
exible)
should be stated on Apache Cassandra website
Please share your feedback.
Thanks
Anuj
On Friday, 8 January 2016 12:07 AM, Robert Coli <rc...@eventbrite.com>
wrote:
On Wed, Jan 6, 2016 at 5:26 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
I would like to underst
les companies to actually make money on open source projects.
Have you considered contacting DataStax and checking their Cassandra EOL policy?
It seems to be very well aligned with what you are looking for.
http://www.datastax.com/support-policy#9
/Janne
On 07 Jan 2016, at 03:26, Anuj Wadehra <
didn't make any mention of how you manage all of your C* infrastructure.
One would hope it's via some sort of automation framework like Chef or
something to help out with some of the heavy lifting.
On Wed, Jan 6, 2016 at 8:26 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
I wou
Thanks Maciek !!
"do you have a link to the versioning policy? The tick-tock versioning blog
post [1] says that EOL happens after two major versions come out, but I can't
find this stated more formally anywhere."I couldn't find any versioning policy
related to EOL. I think it should be there on
302.23 GB 256 35.3% faa5b073-6af4-4c80-b280-e7fdd61924d3 rack1
UN 3 265.02 GB 256 33.1% 74b15507-db5c-45df-81db-6e5bcb7438a3 rack1
Griff
On 13 January 2016 at 18:12, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi,
Revisiting the thread I can see that nodetoo
Hi,
I need to understand whether all existing sstables are recreated/updated when
we change the compaction strategy from STCS to DTCS.
SSTables are immutable by design, but do we make an exception for such cases and
update the same files when an ALTER statement is fired to change the compaction
0 0.00
Griff
On 13 January 2016 at 18:36, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Node 2 has slightly higher data but that should be OK. Not sure how read ops
are so high when no IO-intensive activity such as repair or compaction is
running on node 3. Maybe you can try investigati
Hi
We are on 2.0.14, RF=3 in a 3-node cluster. We use repair -pr. Recently, we
observed that repair -pr for all nodes fails if a node is down. Then I found
the JIRA
https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-2290
where an intentional decision was taken to abort the
Hi,
Can someone take this?
Thanks
Anuj
On Mon, 8 Feb, 2016 at 11:44 pm, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
Setup:
We are on 2.0.14. We have some deployments with just one DC(RF:3) while others
with two DCs (RF:3,RF:3).
We ALWAYS use LOCAL_QUORUM for both
Hi Lorand,
Do you see any different gc pattern during these 20 seconds?
In 2.0.x, memtables create a lot of heap pressure. So in a way, reads are not
isolated from writes.
Frankly speaking, I would have accepted a 20-second slowness, as scaling is a
one-time activity. But maybe your business case
Hi Jean,
Please make sure that your firewall is not dropping TCP connections which are
in use. The TCP keep-alive time on all nodes must be less than the firewall setting.
Please refer to
https://docs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html
for details on TCP
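The keep-alive tuning described in the linked doc can also be illustrated at the socket level. A minimal Python sketch (Linux-specific socket options; the 60 s / 10 s / 6-probe values are illustrative assumptions, not from this thread):

```python
# Sketch: enable TCP keep-alive on a socket so that idle connections are
# probed (and detected as dead) before a firewall silently drops them.
# TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific options.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # turn keep-alive on
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)  # probe after 60s idle
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10) # probe every 10s
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6)    # give up after 6 probes
# 60 + 10*6 = 120s worst case -- keep this below the firewall's idle timeout.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE))
s.close()
```

On a Cassandra node the equivalent system-wide knobs are the net.ipv4.tcp_keepalive_* sysctls that the linked troubleshooting page discusses.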
Hi
What's the fastest and most reliable way to migrate data from a COMPACT STORAGE
table to a non-compact storage table?
I was not able to find any command for dropping the COMPACT STORAGE
directive... so I think migrating data is the only way. Any suggestions?
Thanks
Anuj
the sense of CQL static) column in your legacy
table.
Just define a Scala case class to match this table and use Spark to dump the
content to a new non compact CQL table
On Tue, Feb 2, 2016 at 7:55 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Our old table looks like this from cql
creation script
On Tue, Feb 2, 2016 at 3:48 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Thanks DuyHai !! We were also thinking of doing it the "Spark" way but I was not
sure it would be so simple :)
We have a compact storage cf with each row having some data in
s
have the SAME structure (except for the COMPACT STORAGE
clause), migration with Spark is a 2 lines of
code
On Mon, Feb 1, 2016 at 8:14
PM, Anuj Wadehra <anujw_2...@yahoo.co.in>
wrote:
Hi
What's the fastest and most reliable way
to migrate data from a Compact Storage table to Non-Comp
Carlo
"The best way to predict the future is to invent it" Alan Kay
On Fri, Jan 29, 2016 at 11:02 AM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi Jean,
Please make sure that your firewall is not dropping TCP connections which are
in use. The TCP keep-alive time on all nodes must be
My cqlsh prompt hangs and closes if I try to fetch just 100 rows using a SELECT *
query. cassandra-cli does the job. Any solution?
Thanks
Anuj
rstand
what you mean by "dynamic columns". Given the CREATE TABLE script you gave
earlier, there is nothing such as dynamic columns
On Tue, Feb 2, 2016 at 8:01 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Will it be possible to read dynamic columns data from compact stor
Hi Jimmy,
We are on 2.0.x. We are planning to use JMX notifications for getting repair
status. To repair the database, we call the forceTableRepairPrimaryRange JMX
operation from our Java client application on each node. You can call other,
newer JMX methods for repair.
I would be keen to know the
Hi Subharaj,
Cassandra is built to be a fault-tolerant distributed DB, suitable for
building HA systems. As Cassandra provides multiple replicas of the same data,
if a single node goes down in production, it won't bring down the cluster.
In my opinion, if you aim to start one or more
reated to
allow repairing ranges with down replicas with a special flag (--force). If
you're interested please add comments there and/or propose a patch.
Thanks,
Paulo
2016-01-17 1:33 GMT-03:00 Anuj Wadehra <anujw_2...@yahoo.co.in>:
Hi
We are on 2.0.14, RF=3 in a 3-node cluster. We use rep
, 23 Jan, 2016 at 12:16 am, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Give your use case a deep thought. Different user tables/types may have
different purge strategies based on how frequently a user account type is usually
accessed and what the user count is for each user type
pm, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi Joseph,
I am personally in favour of the second approach because I don't want to do a lot
of IO just because a user is accessing a site several times a day.
Options I see:
1. If you are on SSDs, test LCS and update the TTL of all columns at each
Hi Joseph,
I am personally in favour of the second approach because I don't want to do a lot
of IO just because a user is accessing a site several times a day.
Options I see:
1. If you are on SSDs, test LCS and update the TTL of all columns at each access.
This will make sure that the system can tolerate
And I think in a 3-node cluster, RAID 0 would do the job instead of RAID 5, so
you will need less storage to get the same usable disk space, and replication
still gives you protection against disk failures and in fact entire node failure.
Anuj
On Sat, 23 Jan, 2016 at 10:30 am, Anuj
node goes
down).
Issue https://issues.apache.org/jira/browse/CASSANDRA-10446 was created to
allow repairing ranges with down replicas with a special flag (--force). If
you're interested please add comments there and/or propose a patch.
Thanks,
Paulo
2016-01-17 1:33 GMT-03:00 Anuj Wadehra <anu
I think Jonathan said it earlier. You may be happy with the performance for now
as you are using the same commitlog settings that you use in large clusters.
Test the recommended new setting so that you know the real picture. Or be
prepared to lose some data in case of failure.
Other than
What's the GC overhead? Can you share your GC collector and settings?
What's your query pattern? Do you use secondary indexes, batches, IN clauses etc.?
Anuj
On Thu, 18 Feb, 2016 at 8:45 pm, Mike Heffner wrote:
Alain,
Thanks for the
Thrift mostly)
to 5 tables, between 6-1500 rows per batch.
Mike
On Thu, Feb 18, 2016 at 12:22 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
What's the GC overhead? Can you share your GC collector and settings?
What's your query pattern? Do you use secondary indexes, batches, IN claus
Any comments or suggestions on this one?
Thanks
Anuj
On Sun, 10 Apr, 2016 at 11:39 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi
We are on 2.0.14 and Thrift. We are planning to migrate to CQL soon but are facing
some challenges.
We have a cf with
Hi,
Is it possible to use DataStax OpsCenter for monitoring Apache distributed
Cassandra in Production?
OR
Is it possible to use DataStax OpsCenter if you are not using DataStax
Enterprise in production?
Thanks
Anuj
Hi
We are on 2.0.14 and Thrift. We are planning to migrate to CQL soon but are facing
some challenges.
We have a cf with a mix of statically defined columns and dynamic columns
(created at run time). For reading dynamic columns in CQL, we have two options:
1. Drop all columns and make the table
On Sun, 10 Apr, 2016 at 10:42 PM, Jeff Jirsa<jeff.ji...@crowdstrike.com>
wrote: It is possible to use OpsCenter for open source / community versions
up to 2.2.x. It will not be possible in 3.0+
From: Anuj Wadehra
Reply-To: "user@cassandra.apache.org"
Date: Sunday, April 10,
maybe people
here can help you out.
-- Jack Krupansky
On Mon, Apr 11, 2016 at 10:39 AM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Any comments or suggestions on this one?
Thanks
Anuj
On Sun, 10 Apr, 2016 at 11:39 PM, Anuj Wadehra<anujw_2...
Hi,
I want to understand how Expiring columns work in Cassandra.
Query: the documentation says that once the TTL of a column expires, tombstones are
created/marked when the sstable gets compacted. Is there a possibility that a
query (range scan / row query) returns expired column data just because the
Hi
We are using Spark with Cassandra. While using rdd.saveAsTextFile("/tmp/dr"),
we are getting following error when we run the application with root access.
Spark is able to create two level of directories but fails after that with
Exception:
16/03/01 22:59:48 WARN TaskSetManager: Lost task
With my limited experience with Spark, I can tell you that you need to make
sure that all columns mentioned in somecolumns are part of the table's CQL
schema.
Thanks
Anuj
On Mon, 28 Mar, 2016 at 11:38 pm, Cleosson José Pirani de
I used it with Java, and there every field of the POJO must map to a column name
of the table. I think someone with Scala syntax knowledge can help you better.
Thanks
Anuj
On Mon, 28 Mar, 2016 at 11:47 pm, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
W
Hi,
You can set the property gc_warn_threshold_in_ms in the yaml. For example, if your
application is OK with a 2000 ms pause, you can set the value to 2000 so that
only GC pauses greater than 2000 ms are logged in the GC and status logs.
Please refer
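As a concrete sketch, the corresponding cassandra.yaml entry would look like this (the 2000 ms value is just the example from the mail above):

```yaml
# cassandra.yaml: GC pauses longer than this threshold (in milliseconds)
# are logged at WARN level.
gc_warn_threshold_in_ms: 2000
```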
Hi Carlos,
Please check if the JIRA :
https://issues.apache.org/jira/browse/CASSANDRA-11467 fixes your problem.
We had been facing a row count issue with a Thrift CF / compact storage, and this
fixed it.
The above is fixed in the latest 2.1.14. It's a two-line fix, so you can also
prepare a custom jar and
Hi,
I have a wide row index table so that I can fetch all row keys corresponding to
a column value.
Row of index_table will look like:
ColValue1:bucket1 >> rowkey1, rowkey2 .. rowkeyn
ColValue1:bucketn >> rowkey1, rowkey2 .. rowkeyn
We will have buckets to avoid hotspots. Row keys of main
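The bucketed index layout sketched above can be modelled in a few lines of plain Python (illustrative only; the bucket count, key format, and hash choice are my assumptions, not from the original mail):

```python
# Sketch of the bucketed index-table layout described above: a value's
# row keys are spread across N buckets so that no single partition
# ("value:bucket") of the index table becomes a hotspot.
import hashlib
from collections import defaultdict

N_BUCKETS = 4  # illustrative; real bucket counts depend on write volume

def bucket_for(row_key: str) -> int:
    # Deterministic bucket so the same row key always lands in the
    # same partition of the index table.
    digest = hashlib.md5(row_key.encode()).hexdigest()
    return int(digest, 16) % N_BUCKETS

index = defaultdict(set)  # partition key "value:bucket" -> row keys

def index_insert(value: str, row_key: str) -> None:
    index[f"{value}:{bucket_for(row_key)}"].add(row_key)

def index_lookup(value: str) -> set:
    # A lookup reads all N buckets for the value and merges them.
    out = set()
    for b in range(N_BUCKETS):
        out |= index.get(f"{value}:{b}", set())
    return out

for rk in ("rowkey1", "rowkey2", "rowkey3"):
    index_insert("ColValue1", rk)
print(sorted(index_lookup("ColValue1")))
```

A real lookup would issue one partition read per bucket (value:0 .. value:N-1) against the index table and merge the row keys client-side.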
s) in cassandra.yaml), start the
node with -Dcassandra.join_ring=false and then run a repair
on it. Have a look at https://issues.apache.org/jira/browse/CASSANDRA-6961
Best,
Romain
Le Mardi 26 avril
2016 4h26, Anuj Wadehra <anujw_2...@yahoo.co.in> a
écrit :
Hi,
We
have
etween the
snapshot and the crash?
Sean Durity
From: Anuj Wadehra [mailto:anujw_2...@yahoo.co.in]
Sent: Monday, April 25, 2016 10:26 PM
To: User
Subject: Inconsistent Reads after Restoring Snapshot
Hi,
We have 2.0.14. We use RF=3 and read/write at Quorum. Moreov
, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
I have a wide row index table so that I can fetch all row keys corresponding to
a column value.
Row of index_table will look like:
ColValue1:bucket1 >> rowkey1, rowkey2 .. rowkeyn
ColValue1:bucketn >> rowkey1, rowkey2 .. ro
Hi
We have a 3-node cluster of 2.0.14. We use read/write QUORUM and RF is 3. We
want to move the data and commitlog directories from a SATA HDD to an SSD. We
have planned to do a rolling upgrade.
We plan to run repair -pr on all nodes to sync data upfront and then execute
following steps on each server
Hi,
Can anyone take this question?
Thanks
Anuj
On Sat, 23 Apr, 2016 at 2:30 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
I think I complicated the question... so I am trying to put it
crisply:
We have a table defined with clustering key/
lt;sean_r_dur...@homedepot.com> wrote:
https://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configLogArchive_t.html
Sean Durity
From: Anuj Wadehra [mailto:anujw_2...@yahoo.co.in]
Sent: Wednesday, April 27, 2016 10:44 PM
To: user@cassandra.apache.org
Subject: RE: Inconsis
Hi,
We have 2.0.14. We use RF=3 and read/write at QUORUM. Moreover, we don't use
incremental backups. As per the documentation at
https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_backup_snapshot_restore_t.html
, if I need to restore a snapshot on a SINGLE node in a cluster, I
ld probably follow.
(DataStax's recommendations as well as Al Tobey's
tuning guide are great resources:
https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html
)
Clint
On Apr 23, 2016 3:05
PM, "Anuj Wadehra" <anujw_2...@yahoo.co.in>
wrote:
Hi
We have a 3 no
GAAJ
[2]:
https://groups.google.com/a/lists.datastax.com/d/msg/java-driver-user/tOWZm4RVbm4/5E_aDAc8IAAJ
On Sun, May 8, 2016 at 7:39 AM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi,
Which DataStax Java Driver release is most stable (production ready) for
Cassandra 2.1?
Thanks
Anuj
-
Hi Alain,
This caught my attention:
"Also I am not sure if the 2.2 major version is something you can skip while
upgrading through a rolling restart. I believe you can, but it is not what is
recommended."
Why do you think that skipping 2.2 is not recommended when NEWS.txt suggests
otherwise?
Hi,
Setup: Cassandra 2.0.14 with PropertyFileSnitch. 2 Data Centers.
Every node has broadcast_address = public IP (bond0) and listen_address = private IP
(bond1).
As per DataStax docs,
(https://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configMultiNetworks.html),
"For
Hi
Can someone take these questions?
Thanks
Anuj
On Thu, 11 Aug, 2016 at 8:30 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
Setup: Cassandra 2.0.14 with PropertyFileSnitch. 2 Data Centers.
Every node has broadcast_address = public IP (bond0) and listen_address = private
Hi Branislav,
I quickly went through the code and noticed that you are updating RF from code
and expecting that Cassandra would automatically distribute replicas as per the
new RF. I think this is not how it works. After updating the RF, you need to
run repair on all the nodes to make sure that
Adding to what Benjamin said..
It is hard to estimate disk space if you are using STCS for a table where rows
are updated frequently, leading to a lot of fragmentation. STCS may also lead to
scenarios where tombstones are not evicted for a long time. You may go live and
everything goes well for
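A common rule of thumb that follows from this: under STCS, provision enough free disk that the largest compaction can rewrite its input. A rough sketch (the 2x factor is a worst-case rule of thumb, not an official formula):

```python
# Rough STCS sizing sketch: in the worst case a compaction's input and
# output sstables co-exist on disk until the input can be deleted.
def stcs_headroom_gb(live_data_gb: float, overhead_factor: float = 2.0) -> float:
    """Disk to provision so the largest STCS compaction can complete."""
    return live_data_gb * overhead_factor

print(stcs_headroom_gb(300.0))  # 300 GB live data -> provision ~600 GB
```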
Hi Charulata,
Please share details on how the data is being inserted and read.
Is the client which is reading the data the same as the one which inserted it? Is
the read happening only when insertion is successful? Are you using client
timestamps?
How did you verify that NTP is working properly? How
of preferred IPs.
Thanks
Anuj
On Sun, 21 Aug, 2016 at 7:10 PM, Paulo Motta<pauloricard...@gmail.com> wrote:
See CASSANDRA-9748, I think it might be related.
2016-08-20 15:20 GMT-03:00 Anuj Wadehra <anujw_2...@yahoo.co.in>:
Hi,
We use multiple interfaces in a multi-DC setup. Broad
Hi,
We use multiple interfaces in a multi-DC setup. The broadcast address is the
public IP while the listen address is the private IP.
I don't understand why the preferred IP in the peers table is null for all rows.
There is very little documentation on the role of the preferred IP and when it is
set. As per the code, TCP
24 GMT-03:00 Anuj Wadehra <anujw_2...@yahoo.co.in>:
Hi Paulo,
I am aware of CASSANDRA-9748. It says that Cassandra only listens on
listen_address and not broadcast_address. To overcome that I can add a NAT rule
to route all traffic on the public IP to the private IP.
But why the preferred IP is
interface to private interface. NAT rule is needed due to CASSANDRA-9748
(No process listens on broadcast address).
Thanks
Anuj
On Mon, 22 Aug, 2016 at 11:55 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
We are using PropertyFileSnitch.
Thanks
Anuj
Hi,
We are facing an issue where Cassandra has open file handles for deleted
sstable files. These open file handles keep increasing with time and
eventually lead to a disk crisis. This is visible via the lsof command.
There are no exceptions in the logs. We suspect a race condition where
sandra service helped
get rid of those files in our situation.
thanks
Sai
On Wed, Sep 28, 2016 at 3:15 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi,
We are facing an issue where Cassandra has open file handles for deleted
sstable files. These open file handles keep on increasi
Hi Leena,
The first thing you should be concerned about is: why doesn't the repair -pr
operation complete?
Second comes the question: which repair option is best?
One probable cause of stuck repairs: the firewall between DCs is closing
TCP connections and Cassandra is trying to use such
Hi,
I would like to know how you guys handle leap seconds with Cassandra.
I am not bothered about the livelock issue as we are using appropriate versions
of Linux and Java. I am more interested in finding an optimum answer for the
following question:
How do you handle wrong ordering of multiple
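One common answer to the ordering question (used, for example, by client-side timestamp generators in Cassandra drivers) is to make generated write timestamps monotonic even if the wall clock steps backwards. A minimal sketch of the idea, not actual driver code:

```python
# Sketch: a monotonic client-timestamp generator. If the wall clock steps
# backwards (e.g. around a leap second), we still hand out strictly
# increasing microsecond timestamps, so later writes keep winning.
import time

class MonotonicTimestamps:
    def __init__(self) -> None:
        self._last = 0

    def next(self) -> int:
        now = int(time.time() * 1_000_000)  # microseconds, like C* write timestamps
        # Never go backwards: if the clock did, nudge forward by 1 microsecond.
        self._last = max(now, self._last + 1)
        return self._last

gen = MonotonicTimestamps()
a, b, c = gen.next(), gen.next(), gen.next()
print(a < b < c)  # strictly increasing regardless of clock behaviour
```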
Hi Boying,
I agree with Vladimir. If compaction does not compact the two sstables with the
updates soon, disk space will be wasted. For example, if the updates are not
close in time, the first update might be in a big sstable by the time the second
update is being written to a new small sstable. STCS
Hi,
One popular NTP setup recommended for Cassandra users is described at
https://blog.logentries.com/2014/03/synchronizing-clocks-in-a-cassandra-cluster-pt-2-solutions/
.
Summary of the article: the setup recommends a dedicated pool of internal NTP
servers which are associated as peers to
11/05/apache-cassandra-synchronization/.
Ben
On Thu, 27 Oct 2016 at 10:18 Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi Ben,
Thanks for your reply. We don't use timestamps in the primary key. We rely on
server-side timestamps generated by the coordinator. So, no functions at the client side w
rouble even without leap seconds (clock drift, NTP inaccuracy
etc).
On Thu, 20 Oct 2016 at 10:30 Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi,
I would like to know how you guys handle leap seconds with Cassandra.
I am not bothered about the livelock issue as we are using appropri
Hi Amir,
I would like to understand your requirement first. Why do you need the
multiple-network-interface configuration mentioned at
http://docs.datastax.com/en/cassandra/3.x/cassandra/configuration/configMultiNetworks.html
with a single-DC setup?
As per my understanding, you could simply set listen
Hi Mehdi,
You can refer
https://docs.datastax.com/en/landing_page/doc/landing_page/recommendedSettings.html
.
Thanks
Anuj
On Mon, 17 Oct, 2016 at 10:20 PM, Mehdi Bada
wrote: Hi all,
Are there any best practices for installing Cassandra in production
Hi Leena,
Do you have a firewall between the two DCs? If yes, the connection
reset can be caused by Cassandra trying to use a TCP connection which was
already closed by the firewall. Please make sure that you set a high connection
timeout at the firewall. Also, make sure your servers are not overloaded.
Hi,
I need to understand the use case of join_ring=false in case of node outages.
As per https://issues.apache.org/jira/browse/CASSANDRA-6961, you would want
join_ring=false when you have to repair a node before bringing it back
after some considerable outage. The problem I see with
Any NTP experts willing to take up these questions?
Thanks
Anuj
On Sun, 27 Nov, 2016 at 12:52 AM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
One popular NTP setup recommended for Cassandra users is described at
https://blog.logentries.com/2014/03/synchronizing-
You might find more NTP experts on the NTP questions mailing list:
http://lists.ntp.org/listinfo/questions
On Tue, Dec 13, 2016 at 1:25 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
> Any NTP experts willing to take up these questions?
>
> Thanks
> Anuj
>
> On Sun,
synchronization using reliable
external servers. There is no mandate to set up your own pool of internal NTP
servers for BETTER time synchronization.
Thanks for your inputs.
Anuj
On Wed, 14 Dec, 2016 at 3:22 AM, Martin Schröder<mar...@oneiros.de> wrote:
2016-11-26 20:20 GMT+01:00 Anuj W
Can anyone help me with join_ring and address my concerns?
Thanks
Anuj
On Tue, 13 Dec, 2016 at 11:31 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
I need to understand the use case of join_ring=false in case of node outages.
As per https://issues.apache.org/jira/browse/CAS
Hi Petr,
If data corruption means accidental data deletions via Cassandra commands, you
have to restore the entire cluster with the latest snapshots. This may lead to
data loss, as there may be valid updates after the snapshot was taken but before
the data deletion. Restoring a single node with a snapshot
No responses yet :)
Any C* expert who could help on join_ring use case and the concern raised?
Thanks
Anuj
On Tue, 13 Dec, 2016 at 11:31 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
I need to understand the use case of join_ring=false in case of node outages.
As per
17:40, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
No responses yet :)
Any C* expert who could help on join_ring use case and the concern raised?
Thanks
Anuj
On Tue, 13 Dec, 2016 at 11:31 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
I need to understand the use case
Ensure that all the nodes are on the same schema version so that the table2
schema is replicated properly on all the nodes.
Thanks
Anuj
On Sat, Mar 25, 2017 at 3:19 AM, S G wrote: Hi,
I have a keyspace with two tables.
I run a different
Hi,
Our setup is as follows:
2 DCs with N nodes, RF = DC1:3, DC2:3, hinted handoff = 3 hours, incremental
repair scheduled once on every node (all DCs) within the gc_grace period.
I have the following queries regarding incremental repairs:
1. When a node is down for X hours (where X > the hinted handoff
Hi,
What is the implication of running incremental repair when all nodes have been
upgraded to the new Cassandra rpm but parallel upgradesstables is still running
on one or more of the nodes?
So the upgrade is like:
1. Rolling upgrade of all nodes (rpm install)
2. Parallel upgradesstables on all nodes (no issues
DISCLAIMER: This is only my personal opinion. Evaluate the situation carefully,
and if you find the suggestions below useful, follow them at your own risk.
If I have understood the problem correctly, malicious deletes would actually
lead to deletion of data. I am not sure how everything is normal
Hi,
I have not used DC-local repair specifically, but generally repair syncs all
local tokens of the node with other replicas (full repair) or a subset of local
tokens (-pr and subrange). Full repair with the -dc option should only sync data
for all the tokens present on the node where the command
Hi,
I am not sure why you would want to connect clients on the public interface. Are
you making DB calls from clients outside the DC?
Also, I am not sure why you expect two DCs to communicate on private networks
unless they are two logical DCs within the same physical DC.
Generally, you configure multi
Hi Asad,
You can do the following things:
1. Increase memtable_flush_writers, especially if you have a write-heavy load.
2. Make sure there are no big GC pauses on your nodes. If there are, go for heap
tuning.
Please let us know whether the above measures fixed your problem.
Thanks
Anuj
t some number that may not be optimal.
Thank you again.
From: Anuj Wadehra [mailto:anujw_2...@yahoo.co.in]
Sent: Thursday, July 20, 2017 12:17 PM
To: ZAIDI, ASAD A <az1...@att.com>; user@cassandra.apache.org
Subject: Re: MUTATION messages were dropped in last 5000 ms fo
Hi Peng,
Three things are important when you are evaluating fault tolerance and
availability for your cluster:
1. RF
2. CL
3. Topology - how data is replicated in racks.
If you assume that N nodes from ANY rack may fail at the same time, then you
can afford failure of RF-CL nodes and still be
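The arithmetic behind "you can afford failure of RF-CL nodes" can be made concrete with a small sketch (QUORUM computed the standard way):

```python
# Sketch: node failures tolerable per token range at a given consistency level.
def quorum(rf: int) -> int:
    # QUORUM needs a strict majority of the RF replicas.
    return rf // 2 + 1

def tolerable_failures(rf: int, required_replicas: int) -> int:
    # A request still succeeds if at least `required_replicas` are alive.
    return rf - required_replicas

print(quorum(3), tolerable_failures(3, quorum(3)))  # RF=3, QUORUM: 2 needed, 1 may fail
print(quorum(5), tolerable_failures(5, quorum(5)))  # RF=5, QUORUM: 3 needed, 2 may fail
```

Topology (rack placement) then decides whether a given set of simultaneous node failures actually stays within that budget for every token range.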
Hi Peng,
Racks can be logical (as defined with the RAC attribute in the Cassandra
configuration files) or physical (racks in server rooms).
In my view, for leveraging racks in your case, it's important to understand the
implications of the following decisions:
1. Number of distinct logical RACs defined in
error
to the sender and then immediately delete the message.
On 25 July 2017 at 03:06, Anuj Wadehra <anujw_2...@yahoo.co.in.invalid> wrote:
Hi Peng,
Three things are important when you are evaluating fault tolerance and
availability for your cluster:
1. RF
2. CL
3. Topology - how data is re
Hi Asad,
First, you need to understand the factors impacting cluster capacity. Some of
the important factors to be considered while doing capacity planning of
Cassandra are:
1. Compaction strategy: It impacts disk space requirements and IO/CPU/memory
overhead for compactions.
2. Replication
Also, if you restore exactly the same data with a different IP, you may need to
clear the gossip state on the node.
Anuj
On Tue, Jun 27, 2017 at 11:56 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi Nitan,
I asked for adding auto_bootstrap: false to avoid str
Hi Jean,
Ensure that your firewall is not timing out idle connections. Nodes should time
out idle connections first (using TCP keep-alive settings, before the firewall
does it). Please refer to
http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html.
Thanks Kurt.
I think the main scenario which MUST be addressed by snapshots is backup/restore,
so that a node can be restored in minimal time and the lengthy procedure of
bootstrapping with join_ring=false followed by full repair can be avoided. The
plain restore snapshot + repair scenario
Hi,
I am curious to know how people practically use snapshot restore, given that
snapshot restore may lead to inconsistent reads until full repair is run on the
node being restored (if you have dropped mutations in your cluster).
Example:
9 am: snapshot taken on all 3 nodes
10 am: mutation