Hi,
Do we have any plans for dedicated Apache Cassandra track or sessions at
ApacheCon Berlin in Oct 2019?
CFP closes 26 May, 2019.
Thanks
Anuj Wadehra
Hi Shalom,
Just a suggestion: before upgrading to 3.11.3, make sure you are not impacted by
any open critical defects, especially those related to RT (range tombstones) which
may cause data loss, e.g. 14861.
Please find my response below:
The upgrade process that I know of is from 2.0.14 to 2.1.x (higher than 2.1.9 I
Hi,
I would like to know how people are doing rolling upgrades of Cassandra clusters
when there is a change in native protocol version say from 2.1 to 3.11. During
rolling upgrade, if client application is restarted on nodes, the client driver
may first contact an upgraded Cassandra node with v4
We evaluated both 3.0.x and 3.11.x. +1 for 3.11.2 as we faced major performance
issues with 3.0.x. We have NOT evaluated new features on 3.11.x.
Anuj
Sent from Yahoo Mail on Android
On Tue, 6 Mar 2018 at 19:35, Alain RODRIGUEZ wrote:
Hello Tom,
It's good to hear this
Hi Daniel,
What is the RF and CL for the delete? Are you using asynchronous writes? Are you
firing both statements from the same node sequentially? Are you firing these
queries in a loop, such that more than one delete and LWT is fired for the same
partition?
I think if you have the same client executing both
Hi Peng,
Racks can be logical (as defined with RAC attribute in Cassandra configuration
files) or physical (racks in server rooms).
In my view, for leveraging racks in your case, it's important to understand the
implications of the following decisions:
1. Number of distinct logical RACs defined in
On 25 July 2017 at 03:06, Anuj Wadehra <anujw_2...@yahoo.co.in.invalid> wrote:
Hi Peng,
Three things are important when you are evaluating fault tolerance and
availability for your cluster:
1. RF
2. CL
3. Topology - how data is replicated in racks.
If you assume that N nodes from ANY rack may fail at the same time, then you
can afford failure of RF-CL nodes and still be
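The RF/CL arithmetic above can be sketched in a few lines of Python. This is an illustrative model, not anything from the original thread; `quorum` is the usual RF/2 + 1 rule:

```python
def quorum(rf):
    # Replicas that a QUORUM read or write must reach.
    return rf // 2 + 1

def tolerable_failures(rf, cl_replicas):
    # Replicas of any single token range that may be down while
    # requests at the given consistency level still succeed.
    return rf - cl_replicas

# RF=3 with QUORUM: one replica of each range may be lost.
print(tolerable_failures(3, quorum(3)))  # → 1
```

So with RF=3 and QUORUM, losing two replicas of the same range makes that range unavailable for quorum operations.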
t some number that may not be optimal.
Thank you again.
From: Anuj Wadehra [mailto:anujw_2...@yahoo.co.in]
Sent: Thursday, July 20, 2017 12:17 PM
To: ZAIDI, ASAD A <az1...@att.com>; user@cassandra.apache.org
Subject: Re: MUTATION messages were dropped in last 5000 ms fo
Hi Asad,
You can do the following things:
1. Increase memtable_flush_writers, especially if you have a write-heavy load.
2. Make sure there are no big GC pauses on your nodes. If there are, go for heap
tuning.
Please let us know whether the above measures fixed your problem.
Thanks
Anuj
Sent from
Hi,
I have not used DC-local repair specifically, but generally repair syncs all
local tokens of the node with other replicas (full repair) or a subset of local
tokens (-pr and subrange). Full repair with the -dc option should only sync data
for all the tokens present on the node where the command
Hi,
I am not sure why you would want to connect clients on public interface. Are
you making db calls from clients outside the DC?
Also, not sure why you expect two DCs to communicate on private networks unless
they are two logical DCs within same physical DC.
Generally, you configure multi
Hi Jean,
Ensure that your firewall is not timing out idle connections. Nodes should time
out idle connections first, using TCP keepalive settings, before the firewall
does. Please refer to
http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html.
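The idle-connection advice can be illustrated at the socket level. This is a generic Python sketch, not Cassandra's own code; the per-probe options are Linux-specific and the timing values are only examples:

```python
import socket

def enable_keepalive(sock, idle_s=60, interval_s=10, probes=5):
    # Probe idle connections so dead peers are noticed before a
    # firewall silently drops the connection.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only fine tuning
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_s)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_s)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero when enabled
s.close()
```

The idea is the same whether the keepalive is set per-socket as above or system-wide via the tcp_keepalive_* sysctls mentioned in the linked document: the probe interval must be shorter than the firewall's idle timeout.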
Hi Asad,
First, you need to understand the factors impacting cluster capacity. Some of
the important factors to be considered while doing capacity planning of
Cassandra are:
1. Compaction strategy: It impacts disk space requirements and IO/CPU/memory
overhead for compactions.
2. Replication
Thanks Kurt.
I think the main scenario which MUST be addressed by snapshots is backup/restore,
so that a node can be restored in minimal time and the lengthy procedure of
bootstrapping with join_ring=false followed by full repair can be avoided. The
plain restore snapshot + repair scenario
Also, if you restore exactly same data with different IP, you may need to clear
gossip state on the node.
Anuj
Sent from Yahoo Mail on Android
On Tue, Jun 27, 2017 at 11:56 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi Nitan,
I asked for adding autobootstrap false to avoid str
Hi Nitan,
I think it would be simpler to take one node down at a time and replace it by
bringing the new node up after the Linux upgrade, doing the same Cassandra setup,
using the replace_address option and setting autobootstrap=false (as the data is
already there). No downtime, as it would be a rolling
Hi,
I am curious to know how people practically use snapshot restore, given that
snapshot restore may lead to inconsistent reads until full repair is run on the
node being restored (if you have dropped mutations in your cluster).
Example:
9 am: snapshot taken on all 3 nodes. 10 am: mutation
Hi Meg,
max_hint_window_in_ms = 3 hrs means that if a node is down/unresponsive for more
than 3 hrs, hints will not be stored for it any further until it becomes
responsive again. It does not mean that already stored hints are truncated after
3 hours.
Regarding connection timeouts
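The hint-window behaviour described above can be put as a toy model. The threshold is the 3-hour value from the mail; everything else is illustrative, not Cassandra's actual hint code:

```python
MAX_HINT_WINDOW_MS = 3 * 60 * 60 * 1000  # 3 hours

def should_store_hint(node_down_for_ms):
    # Hints are written for an unresponsive node only while its
    # downtime is still within the hint window; hints that were
    # already stored are not truncated when the window is exceeded.
    return node_down_for_ms <= MAX_HINT_WINDOW_MS

print(should_store_hint(2 * 60 * 60 * 1000))  # → True  (down 2 h)
print(should_store_hint(4 * 60 * 60 * 1000))  # → False (down 4 h)
```

Once the window is exceeded, the node must be repaired after it comes back, since new writes during the gap were never hinted.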
Hi Mark,
Please ensure that the node is not defined as seed node in the yaml. Seed nodes
don't bootstrap.
Thanks
Anuj
On Tue, Jun 27, 2017 at 9:56 PM, Mark Furlong wrote:
I have a node that has been decommissioned and it showed ‘UL’, the data volume
and the
y were taking more like 8-9 hours.
As I understand it, using incremental should have sped this process up as all
three sets of data on each repair job should be marked as repaired however this
does not seem to be the case. Any ideas?
Chris
On 6 Jun 2017, at 16:08, Anuj Wadehra <anujw_2...@yahoo.co.i
Hi Chris,
Using -pr with incremental repairs does not make sense. Primary range repair is
an optimization over full repair: if you run full repair on an n-node cluster
with RF=3, you would be repairing each piece of data three times. E.g. in a
5-node cluster with RF=3, a range may exist on nodes A, B and C.
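The redundancy argument can be put as toy arithmetic. This is a deliberately simplified model of running `nodetool repair` on every node, not nodetool's actual accounting:

```python
def repairs_per_range(rf, primary_range_only):
    # Running repair on every node of the cluster: a full repair
    # touches every range the node replicates, so each range is
    # repaired once per replica (rf times in total); with -pr each
    # range is repaired exactly once, by its primary owner.
    return 1 if primary_range_only else rf

print(repairs_per_range(3, primary_range_only=False))  # → 3
print(repairs_per_range(3, primary_range_only=True))   # → 1
```

That factor-of-RF saving is why -pr exists for scheduled full repairs, and why mixing it with incremental repair (which already tracks repaired data) buys nothing.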
Ensure that all the nodes are on same schema version such that table2 schema is
replicated properly on all the nodes.
Thanks
Anuj
Sent from Yahoo Mail on Android
On Sat, Mar 25, 2017 at 3:19 AM, S G wrote: Hi,
I have a keyspace with two tables.
I run a different
Hi,
What is the implication of running inc repair when all nodes have upgraded to
new Cassandra rpm but parallel upgradesstables is still running on one or more
of the nodes?
So the upgrade is like:
1. Rolling upgrade of all nodes (rpm install)
2. Parallel upgrade of sstables on all nodes (no issues
Hi,
Our setup is as follows:
2 DCS with N nodes, RF=DC1:3,DC2:3, Hinted Handoff=3 hours, Incremental Repair
scheduled once on every node (ALL DCs) within the gc grace period.
I have following queries regarding incremental repairs:
1. When a node is down for X hours (where x > hinted handoff
DISCLAIMER: This is only my personal opinion. Evaluate the situation carefully
and if you find below suggestions useful, follow them at your own risk.
If I have understood the problem correctly, malicious deletes would actually
lead to deletion of data. I am not sure how everything is normal
Hi Charulata,
Please share details on how data is being inserted and read.
Is the client which is reading the data same as the one which inserted it? Is
the read happening only when insertion is successful? Are you using client
timestamps?
How did you verify that NTP is working properly? How
Hi Branislav,
I quickly went through the code and noticed that you are updating the RF from
code and expecting that Cassandra will automatically redistribute replicas as
per the new RF. That is not how it works. After updating the RF, you need to
run repair on all the nodes to make sure that
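The point that an RF change is only metadata until repair runs can be sketched as below. The keyspace name, DC names and the helper function are purely illustrative:

```python
def rf_change_plan(keyspace, dc_rfs):
    # Changing RF only updates schema metadata; existing data is only
    # streamed to the new replicas by running repair on the nodes.
    rf_map = ", ".join("'%s': %d" % (dc, rf) for dc, rf in dc_rfs.items())
    alter = ("ALTER KEYSPACE %s WITH replication = "
             "{'class': 'NetworkTopologyStrategy', %s};" % (keyspace, rf_map))
    repair = "nodetool repair -full %s  # run on every node" % keyspace
    return alter, repair

alter, repair = rf_change_plan("my_ks", {"DC1": 3, "DC2": 3})
print(alter)
```

Until the repair step completes, reads at low consistency levels may hit a "new" replica that has no data yet.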
Adding to what Benjamin said..
It is hard to estimate disk space if you are using STCS for a table where rows
are updated frequently, leading to a lot of fragmentation. STCS may also lead to
scenarios where tombstones are not evicted for a long time. You may go live and
everything goes well for
17:40, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
No responses yet :)
Any C* expert who could help on join_ring use case and the concern raised?
Thanks
Anuj
On Tue, 13 Dec, 2016 at 11:31 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
I need to understand the use case
No responses yet :)
Any C* expert who could help on join_ring use case and the concern raised?
Thanks
Anuj
On Tue, 13 Dec, 2016 at 11:31 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
I need to understand the use case of join_ring=false in case of node outages.
As per
Can anyone help me with join_ring and address my concerns?
Thanks
Anuj
On Tue, 13 Dec, 2016 at 11:31 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
I need to understand the use case of join_ring=false in case of node outages.
As per https://issues.apache.org/jira/browse/CAS
synchronization using reliable external servers. There is no mandate to set up
your own pool of internal NTP servers for BETTER time synchronization.
Thanks for your inputs.
Anuj
On Wed, 14 Dec, 2016 at 3:22 AM, Martin Schröder<mar...@oneiros.de> wrote:
2016-11-26 20:20 GMT+01:00 Anuj W
You might find more NTP experts on the NTP questions mailing list:
http://lists.ntp.org/listinfo/questions
On Tue, Dec 13, 2016 at 1:25 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
> Any NTP experts willing to take up these questions?
>
> Thanks
> Anuj
>
> On Sun,
Any NTP experts willing to take up these questions?
Thanks
Anuj
On Sun, 27 Nov, 2016 at 12:52 AM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
One popular NTP setup recommended for Cassandra users is described at
https://blog.logentries.com/2014/03/synchronizing-
Hi,
I need to understand the use case of join_ring=false in case of node outages.
As per https://issues.apache.org/jira/browse/CASSANDRA-6961, you would want
join_ring=false when you have to repair a node before bringing a node back
after some considerable outage. The problem I see with
Hi Petr,
If data corruption means accidental data deletions via Cassandra commands, you
have to restore the entire cluster from the latest snapshots. This may lead to
data loss, as there may be valid updates after the snapshot was taken but before
the data deletion. Restoring a single node with a snapshot
Hi,
One popular NTP setup recommended for Cassandra users is described at
https://blog.logentries.com/2014/03/synchronizing-clocks-in-a-cassandra-cluster-pt-2-solutions/.
Summary of the article: the setup recommends a dedicated pool of internal NTP
servers which are associated as peers to
Hi Boying,
I agree with Vladimir. If compaction does not compact the two sstables containing
the updates soon, disk space will be wasted. For example, if the updates are not
close in time, the first update might be in a big sstable by the time the second
update is being written to a new small sstable. STCS
11/05/apache-cassandra-synchronization/.
Ben
On Thu, 27 Oct 2016 at 10:18 Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi Ben,
Thanks for your reply. We don't use timestamps in the primary key. We rely on
server-side timestamps generated by the coordinator. So, no functions at client side w
rouble even without leap seconds (clock drift, NTP inaccuracy
etc).
On Thu, 20 Oct 2016 at 10:30 Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi,
I would like to know how you guys handle leap seconds with Cassandra.
I am not bothered about the livelock issue as we are using appropri
Hi,
I would like to know how you guys handle leap seconds with Cassandra.
I am not bothered about the livelock issue as we are using appropriate versions
of Linux and Java. I am more interested in finding an optimum answer for the
following question:
How do you handle wrong ordering of multiple
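One common way drivers address writes being mis-ordered when the clock steps backwards (a leap second or an NTP correction) is a monotonic client-side timestamp generator. This is a generic sketch of that idea, not the poster's actual setup:

```python
import threading
import time

class MonotonicTimestamps:
    # Microsecond write timestamps that never decrease, even when the
    # wall clock steps back (e.g. around a leap second).
    def __init__(self):
        self._last = 0
        self._lock = threading.Lock()

    def next(self):
        with self._lock:
            now = int(time.time() * 1_000_000)
            # If the clock went backwards, keep counting up from the
            # last value we handed out instead of going back in time.
            self._last = max(now, self._last + 1)
            return self._last

gen = MonotonicTimestamps()
a, b = gen.next(), gen.next()
print(b > a)  # → True
```

This only fixes ordering for writes issued through one generator; writes racing from different clients still rely on the clocks being synchronized.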
Hi Mehdi,
You can refer
https://docs.datastax.com/en/landing_page/doc/landing_page/recommendedSettings.html
.
Thanks
Anuj
On Mon, 17 Oct, 2016 at 10:20 PM, Mehdi Bada
wrote: Hi all,
Do any best practices exist for installing Cassandra in production
Hi Leena,
Do you have a firewall between the two DCs? If yes, connection
reset can be caused by Cassandra trying to use a TCP connection which is
already closed by the firewall. Please make sure that you set high connection
timeout at firewall. Also, make sure your servers are not overloaded.
Hi Leena,
The first thing you should be concerned about is: why does the repair -pr
operation not complete?
The second question: which repair option is best?
One probable cause of stuck repairs: the firewall between DCs is closing TCP
connections and Cassandra is trying to use such
Hi Amir,
I would like to understand your requirement first. Why do you need the
multi-network interface configuration mentioned at
http://docs.datastax.com/en/cassandra/3.x/cassandra/configuration/configMultiNetworks.html
with single DC setup?
As per my understanding, you could simply set listen
sandra service helped
get rid of those files in our situation.
thanks
Sai
On Wed, Sep 28, 2016 at 3:15 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi,
We are facing an issue where Cassandra has open file handles for deleted
sstable files. These open file handles keep on increasi
Hi,
We are facing an issue where Cassandra has open file handles for deleted
sstable files. These open file handles keep on increasing with time and
eventually lead to disk crisis. This is visible via lsof command.
There are no Exceptions in the logs. We suspect a race condition where
interface to private interface. NAT rule is needed due to CASSANDRA-9748
(No process listens on broadcast address).
Thanks
Anuj
Sent from Yahoo Mail on Android
On Mon, 22 Aug, 2016 at 11:55 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
We are using PropertyFileSnitch.
Thanks
Anuj
24 GMT-03:00 Anuj Wadehra <anujw_2...@yahoo.co.in>:
Hi Paulo,
I am aware of CASSANDRA-9748. It says that Cassandra only listens on
listen_address and not broadcast_address. To overcome that, I can add a NAT rule
to route all traffic on the public IP to the private IP.
But, why preferred IP is
of preferred IPs.
Thanks
Anuj
On Sun, 21 Aug, 2016 at 7:10 PM, Paulo Motta<pauloricard...@gmail.com> wrote:
See CASSANDRA-9748, I think it might be related.
2016-08-20 15:20 GMT-03:00 Anuj Wadehra <anujw_2...@yahoo.co.in>:
Hi,
We use multiple interfaces in a multi-DC setup. Broad
Hi,
We use multiple interfaces in a multi-DC setup. The broadcast address is the
public IP while the listen address is the private IP.
I don't understand why the preferred IP in the peers table is null for all rows.
There is very little documentation on the role of preferred IP and when it is
set. As per code TCP
Hi
Can someone take these questions?
Thanks
Anuj
On Thu, 11 Aug, 2016 at 8:30 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
Setup: Cassandra 2.0.14 with PropertyFileSnitch. 2 Data Centers.
Every node has broadcast address= Public IP (bond0) & listen address=Private
Hi,
Setup: Cassandra 2.0.14 with PropertyFileSnitch. 2 Data Centers.
Every node has broadcast address= Public IP (bond0) & listen address=Private IP
(bond1).
As per DataStax docs,
(https://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configMultiNetworks.html),
"For
Hi Alain,
This caught my attention:
"Also I am not sure if the 2.2 major version is something you can skip while
upgrading through a rolling restart. I believe you can, but it is not what is
recommended."
Why do you think that skipping 2.2 is not recommended when NEWS.txt suggests
otherwise?
Hi,
We are using C* 2.0.x. What options are available if the disk is too full to
compact the huge sstables formed by STCS (created long ago but not getting
compacted due to min_compaction_threshold being 4)?
We suspect that huge space will be released when 2 largest sstables get
GAAJ
[2]:
https://groups.google.com/a/lists.datastax.com/d/msg/java-driver-user/tOWZm4RVbm4/5E_aDAc8IAAJ
On Sun, May 8, 2016 at 7:39 AM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi,
Which DataStax Java Driver release is most stable (production ready) for
Cassandra 2.1?
Thanks
Anuj
-
lt;sean_r_dur...@homedepot.com> wrote:
https://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configLogArchive_t.html
Sean Durity
From: Anuj Wadehra [mailto:anujw_2...@yahoo.co.in]
Sent: Wednesday, April 27, 2016 10:44 PM
To: user@cassandra.apache.org
Subject: RE: Inconsis
etween the
snapshot and the crash?
Sean Durity
From: Anuj Wadehra [mailto:anujw_2...@yahoo.co.in]
Sent: Monday, April 25, 2016 10:26 PM
To: User
Subject: Inconsistent Reads after Restoring Snapshot
Hi,
We have 2.0.14. We use RF=3 and read/write at Quorum. Moreov
s) in cassandra.yaml), start the
node with -Dcassandra.join_ring=false and then run a repair
on it. Have a look at https://issues.apache.org/jira/browse/CASSANDRA-6961
Best,
Romain
Le Mardi 26 avril
2016 4h26, Anuj Wadehra <anujw_2...@yahoo.co.in> a
écrit :
Hi,
We
have
ld probably follow.
(Datastax's recommendations as well as AL tobey's
tuning guide are great resources.
https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html
)
Clint
On Apr 23, 2016 3:05
PM, "Anuj Wadehra" <anujw_2...@yahoo.co.in>
wrote:
Hi
We have a 3 no
Hi,
We have 2.0.14. We use RF=3 and read/write at Quorum. Moreover, we don't use
incremental backups. As per the documentation at
https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_backup_snapshot_restore_t.html
, if i need to restore a Snapshot on SINGLE node in a cluster, I
Hi Carlos,
Please check if the JIRA :
https://issues.apache.org/jira/browse/CASSANDRA-11467 fixes your problem.
We had been facing a row count issue with a Thrift CF / compact storage and this
fixed it.
The above is fixed in the latest 2.1.14. It's a two-line fix, so you can also prepare a
custom jar and
Hi,
You can set the property gc_warn_threshold_in_ms in the yaml. For example, if
your application is OK with a 2000 ms pause, you can set the value to 2000 so
that only GC pauses longer than 2000 ms are logged as warnings.
Please refer
Hi
We have a 3 node cluster of 2.0.14. We use Read/Write Quorum and RF is 3. We
want to move data and commitlog directory from a SATA HDD to SSD. We have
planned to do a rolling upgrade.
We plan to run repair -pr on all nodes to sync data upfront and then execute
following steps on each server
Hi,
Can anyone take this question?
Thanks
Anuj
Sent from Yahoo Mail on Android
On Sat, 23 Apr, 2016 at 2:30 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
I think I complicated the question, so I am trying to put it crisply:
We have a table defined with clustering key/
, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
I have a wide row index table so that I can fetch all row keys corresponding to
a column value.
Row of index_table will look like:
ColValue1:bucket1 >> rowkey1, rowkey2 .. rowkeyn
ColValue1:bucketn >> rowkey1, rowkey2 .. ro
Hi,
I have a wide row index table so that I can fetch all row keys corresponding to
a column value.
Row of index_table will look like:
ColValue1:bucket1 >> rowkey1, rowkey2 .. rowkeyn
ColValue1:bucketn >> rowkey1, rowkey2 .. rowkeyn
We will have buckets to avoid hotspots. Row keys of main
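The bucketing scheme in the snippet can be sketched as follows. The bucket count and the hash choice are illustrative assumptions, not details from the original mail:

```python
import zlib

N_BUCKETS = 16  # illustrative bucket count

def index_partition(col_value, row_key):
    # Spread entries for one column value over several index
    # partitions; a stable hash of the indexed row key picks the
    # bucket, so a hot value never becomes one huge wide row.
    bucket = zlib.crc32(row_key.encode()) % N_BUCKETS
    return "%s:bucket%d" % (col_value, bucket)

def all_partitions(col_value):
    # Reads must fan out over every bucket of the value to collect
    # all row keys.
    return ["%s:bucket%d" % (col_value, b) for b in range(N_BUCKETS)]

print(index_partition("red", "rowkey1") in all_partitions("red"))  # → True
```

The trade-off is classic: more buckets means better write distribution but more partitions to read back per lookup.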
maybe people
here can help you out.
-- Jack Krupansky
On Mon, Apr 11, 2016 at 10:39 AM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Any comments or suggestions on this one?
Thanks
Anuj
Sent from Yahoo Mail on Android
On Sun, 10 Apr, 2016 at 11:39 PM, Anuj Wadehra<anujw_2...
Any comments or suggestions on this one?
Thanks
Anuj
Sent from Yahoo Mail on Android
On Sun, 10 Apr, 2016 at 11:39 PM, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi
We are on 2.0.14 and Thrift. We are planning to migrate to CQL soon but facing
some challenges.
We have a cf with
Hi
We are on 2.0.14 and Thrift. We are planning to migrate to CQL soon but facing
some challenges.
We have a cf with a mix of statically defined columns and dynamic columns
(created at run time). For reading dynamic columns in CQL, we have two options:
1. Drop all columns and make the table
On Sun, 10 Apr, 2016 at 10:42 PM, Jeff Jirsa<jeff.ji...@crowdstrike.com>
wrote: It is possible to use OpsCenter for open source / community versions
up to 2.2.x. It will not be possible in 3.0+
From: Anuj Wadehra
Reply-To: "user@cassandra.apache.org"
Date: Sunday, April 10,
Hi,
Is it possible to use DataStax OpsCenter for monitoring Apache distributed
Cassandra in Production?
OR
Is it possible to use DataStax OpsCenter if you are not using DataStax
Enterprise in production?
Thanks
Anuj
I used it with Java and there, every field of Pojo must map to column names of
the table. I think someone with Scala syntax knowledge can help you better.
Thanks
Anuj
Sent from Yahoo Mail on Android
On Mon, 28 Mar, 2016 at 11:47 pm, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
W
With my limited experience with Spark, I can tell you that you need to make
sure that all columns mentioned in somecolumns must be part of CQL schema of
table.
Thanks
Anuj
Sent from Yahoo Mail on Android
On Mon, 28 Mar, 2016 at 11:38 pm, Cleosson José Pirani de
Hi,
I want to understand how expiring columns work in Cassandra.
Query: The documentation says that once the TTL of a column expires, tombstones
are created/marked when the sstable gets compacted. Is there a possibility that
a query (range scan / row query) returns expired column data just because the
Hi
We are using Spark with Cassandra. While using rdd.saveAsTextFile("/tmp/dr"),
we are getting following error when we run the application with root access.
Spark is able to create two level of directories but fails after that with
Exception:
16/03/01 22:59:48 WARN TaskSetManager: Lost task
Hi Jimmy,
We are on 2.0.x. We are planning to use JMX notifications for getting repair
status. To repair database, we call forceTableRepairPrimaryRange JMX operation
from our Java client application on each node. You can call other latest JMX
methods for repair.
I would be keen in knowing the
Hi Subharaj,
Cassandra is built to be a fault-tolerant distributed DB and is suitable for
building HA systems. As Cassandra provides multiple replicas of the same data,
if a single node goes down in production, it won't bring down the cluster.
In my opinion, if you target to start one or more
Thrift mostly)
to 5 tables, between 6-1500 rows per batch.
Mike
On Thu, Feb 18, 2016 at 12:22 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
What's the GC overhead? Can you share your GC collector and settings?
What's your query pattern? Do you use secondary indexes, batches, IN claus
What's the GC overhead? Can you share your GC collector and settings?
What's your query pattern? Do you use secondary indexes, batches, IN clause etc.?
Anuj
Sent from Yahoo Mail on Android
On Thu, 18 Feb, 2016 at 8:45 pm, Mike Heffner wrote:
Alain,
Thanks for the
Hi,
Can someone take this?
Thanks
Anuj
On Mon, 8 Feb, 2016 at 11:44 pm, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi,
Setup:
We are on 2.0.14. We have some deployments with just one DC(RF:3) while others
with two DCs (RF:3,RF:3).
We ALWAYS use LOCAL_QUORUM for both
the sense of CQL static) column in your legacy
table.
Just define a Scala case class to match this table and use Spark to dump the
content to a new non compact CQL table
On Tue, Feb 2, 2016 at 7:55 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Our old table looks like this from cql
creation script
On Tue, Feb 2, 2016 at 3:48 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Thanks DuyHai !! We were also thinking of doing it the "Spark" way, but I was
not sure it would be so simple :)
We have a compact storage cf with each row having some data in
s
have the SAME structure (except for the COMPACT STORAGE
clause), migration with Spark is a 2 lines of
code
On Mon, Feb 1, 2016 at 8:14
PM, Anuj Wadehra <anujw_2...@yahoo.co.in>
wrote:
Hi
Whats the fastest and reliable way
to migrate data from a Compact Storage table to Non-Comp
Carlo
"The best way to predict the future is to invent it" Alan Kay
On Fri, Jan 29, 2016 at 11:02 AM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi Jean,
Please make sure that your Firewall is not dropping TCP connections which are
in use. Tcp keep alive on all nodes must be
My cqlsh prompt hangs and closes if I try to fetch just 100 rows using select *
query. Cassandra-cli does the job. Any solution?
Thanks
Anuj
rstand
what you mean by "dynamic columns". Given the CREATE TABLE script you gave
earlier, there is nothing such as dynamic columns
On Tue, Feb 2, 2016 at 8:01 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Will it be possible to read dynamic columns data from compact stor
Hi
What's the fastest and most reliable way to migrate data from a compact storage
table to a non-compact storage table?
I was not able to find any command for dropping the compact storage directive,
so I think migrating the data is the only way. Any suggestions?
Thanks
Anuj
Hi Jean,
Please make sure that your firewall is not dropping TCP connections which are
in use. The TCP keepalive time on all nodes must be less than the firewall setting.
Please refer to
https://docs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html
for details on TCP
Hi Lorand,
Do you see any different GC pattern during these 20 seconds?
In 2.0.x, memtables create a lot of heap pressure, so in a way reads are not
isolated from writes.
Frankly speaking, I would have accepted 20-second slowness, as scaling is a
one-time activity. But maybe your business case
, 23 Jan, 2016 at 12:16 am, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Give a deep thought on your use case. Different user tables/types may have
different purge strategy based on how frequently a user account type is usually
accessed, whats the user count for each user type
pm, Anuj Wadehra<anujw_2...@yahoo.co.in> wrote:
Hi Joseph,
I am personally in favour of the second approach because I don't want to do a
lot of I/O just because a user is accessing a site several times a day.
Options I see:
1. If you are on SSDs, test LCS and update the TTL of all columns at each
Hi Joseph,
I am personally in favour of the second approach because I don't want to do a
lot of I/O just because a user is accessing a site several times a day.
Options I see:
1. If you are on SSDs, test LCS and update the TTL of all columns at each access.
This will make sure that the system can tolerate
And I think in a 3-node cluster, RAID 0 would do the job instead of RAID 5, so
you will need less raw storage to get the same usable disk space. You will still
get protection against disk failures, and in fact entire node failure, through
replication.
Anuj
Sent from Yahoo Mail on Android
On Sat, 23 Jan, 2016 at 10:30 am, Anuj
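The storage trade-off from the RAID discussion above can be made concrete with a little arithmetic; the disk counts and sizes are illustrative:

```python
def usable_tb(disks, disk_tb, level):
    # RAID 0 stripes across all disks; RAID 5 gives up one disk's
    # worth of capacity to parity. With RF=3, Cassandra already keeps
    # three copies of the data, so RAID 0's full capacity can be used
    # and disk or node failures are absorbed by the other replicas.
    if level == "raid0":
        return disks * disk_tb
    if level == "raid5":
        return (disks - 1) * disk_tb
    raise ValueError(level)

print(usable_tb(4, 2, "raid0"))  # → 8
print(usable_tb(4, 2, "raid5"))  # → 6
```

In other words, RAID 5's parity duplicates protection that the replication factor already provides, at the cost of capacity and write performance.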
I think Jonathan said it earlier. You may be happy with the performance for now
as you are using the same commitlog settings that you use in large clusters.
Test the new setting recommended so that you know the real picture. Or be
prepared to lose some data in case of failure.
Other than
reated to
allow repairing ranges with down replicas with a special flag (--force). If
you're interested please add comments there and/or propose a patch.
Thanks,
Paulo
2016-01-17 1:33 GMT-03:00 Anuj Wadehra <anujw_2...@yahoo.co.in>:
Hi
We are on 2.0.14,RF=3 in a 3 node cluster. We use rep
node goes
down).
Issue https://issues.apache.org/jira/browse/CASSANDRA-10446 was created to
allow repairing ranges with down replicas with a special flag (--force). If
you're interested please add comments there and/or propose a patch.
Thanks,
Paulo
2016-01-17 1:33 GMT-03:00 Anuj Wadehra <anu
Hi
We are on 2.0.14,RF=3 in a 3 node cluster. We use repair -pr . Recently, we
observed that repair -pr for all nodes fails if a node is down. Then I found
the JIRA
https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-2290
where an intentional decision was taken to abort the
Hi,
I need to understand whether all existing sstables are recreated/updated when we
change the compaction strategy from STCS to DTCS.
Sstables are immutable by design, but do we make an exception for such cases and
update the same files when an ALTER statement is fired to change the compaction