Good morning,
unfortunately my last rolling restart of our Cassandra cluster issued from
OpsCenter (5.0.2) failed. No big deal, but since then OpsCenter is showing
an error message at the top of its screen:
Error restarting cluster: Timed out waiting for Cassandra to start.
Does anybody know:
- the number of tombstones - how can I reliably find it out?
https://github.com/spotify/cassandra-opstools
https://github.com/cloudian/support-tools
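Besides those tools, a quick rough per-table view can be scraped from `nodetool cfstats`, which on 2.0+ reports tombstones-per-slice summaries. A minimal sketch, assuming the stock output format (older versions print "Column Family:" instead of "Table:"):

```python
import re

def tombstone_stats(cfstats_output):
    """Pull per-table tombstone metrics out of `nodetool cfstats` text.

    Assumes the stock output format: each table section starts with a
    'Table:' (or 'Column Family:' on older versions) line and contains
    'Average/Maximum tombstones per slice' lines.
    """
    stats = {}
    table = None
    for line in cfstats_output.splitlines():
        line = line.strip()
        m = re.match(r"(?:Table|Column Family)(?: \(index\))?: (\S+)", line)
        if m:
            table = m.group(1)
            continue
        m = re.match(r"(Average|Maximum) tombstones per slice.*: ([\d.]+)", line)
        if m and table is not None:
            stats.setdefault(table, {})[m.group(1).lower()] = float(m.group(2))
    return stats
```

Feed it the captured output of `nodetool cfstats <keyspace>`; the result maps each table name to its average/maximum tombstones-per-slice figures.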
If you are not getting much compression, it may be worth trying to disable it.
It may contribute, but it's very unlikely that it's the cause of the GC pressure.
Hi:
I set up a one-node Cassandra server and am using the Node.js driver to
query the db with CQL.
But when I insert into a table with an IF NOT EXISTS statement, it reports
an error as below:
ResponseError: Cannot achieve consistency level QUORUM
And I tried setting the Node.js CQL query consistency to ONE, but still see
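A common cause on a one-node cluster is a keyspace accidentally created with replication_factor 3 (an assumption about the reporter's setup). `IF NOT EXISTS` runs a Paxos round at SERIAL consistency, which needs a quorum of replicas regardless of the statement's regular consistency level, so lowering the query consistency to ONE doesn't help. A sketch of the arithmetic:

```python
def quorum(replication_factor):
    """Replicas that must respond for QUORUM (and for LWT's Paxos phase):
    floor(RF / 2) + 1."""
    return replication_factor // 2 + 1

def can_achieve_quorum(replication_factor, live_replicas):
    return live_replicas >= quorum(replication_factor)

# Keyspace created with RF 3 on a single-node cluster: quorum(3) == 2,
# but only 1 replica exists, so the Paxos round behind IF NOT EXISTS fails
# with "Cannot achieve consistency level QUORUM" even when the query's
# own consistency is ONE.
```

If that is the situation, altering the keyspace down to replication_factor 1 should make the error go away.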
Hi,
We have a two-DC cluster with 21 nodes in one DC and 27 in the other. Over
the past few months, we have seen nodetool status mark 4-8 nodes down while
they are actually functioning. Particularly today we noticed that running
nodetool status on some nodes shows a higher number of nodes down than
Sorry, no - you are not doing it wrong :)
Yes, Cassandra's partitioner is based on a hash ring. Doubling the number of
nodes is the best cluster-extending policy I've ever seen, because it's
zero-overhead. Hash ring - you take the MD5 max (2^128-1), divide it by the
number of nodes (partitions), getting N
On Mon, Feb 9, 2015 at 4:59 PM, Seth Edwards s...@pubnub.com wrote:
We are choosing to double our cluster from six to twelve. I ran the token
generator. Based on what I read in the documentation, I expected to see the
same first six tokens and six new tokens. Instead I see almost the same
Hi Cheng,
Are all machines configured with NTP and all clocks in sync? If that is not
the case, do so.
Clocks out of sync cause weird issues like the ones you are seeing, but also
schema disagreements and, in some cases, corrupted data.
Regards,
Carlos Juzarte Rolo
I see what you are saying. So basically take whatever existing token I have
and divide it by 2, give or take a couple of tokens?
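That is the idea: each new node's token bisects an existing node's range. A sketch of the arithmetic over RandomPartitioner's 2^127 token space (illustrative only, not the output of the real token generator):

```python
RING = 2 ** 127  # RandomPartitioner token space: [0, 2**127)

def initial_tokens(n):
    """The classic generator formula: n evenly spaced ring positions."""
    return [i * RING // n for i in range(n)]

def doubled_tokens(tokens):
    """Keep every existing token and add the midpoint of each range, so
    existing nodes keep their data and each new node takes half of one
    old node's range."""
    out = []
    n = len(tokens)
    for i, t in enumerate(tokens):
        # Next token on the ring; wrap the last range past RING.
        nxt = tokens[(i + 1) % n] + (RING if i == n - 1 else 0)
        out.append(t)
        out.append((t + nxt) // 2 % RING)
    return out
```

With these integer formulas, `initial_tokens(12)` reproduces `initial_tokens(6)` at the even positions with the six midpoints interleaved; generator implementations that round differently would produce tokens that look "almost the same" as the old ones, which may be what the regenerated list showed.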
On Mon, Feb 9, 2015 at 5:17 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Feb 9, 2015 at 4:59 PM, Seth Edwards s...@pubnub.com wrote:
We are choosing to
Can you copy an example of your read and write queries? Are they both
degrading in the same way performance-wise?
On Mon, Feb 9, 2015 at 8:39 PM, Laing, Michael michael.la...@nytimes.com
wrote:
Use token-awareness so you don't have as much coordinator overhead.
ml
On Mon, Feb 9, 2015 at
Hi all,
thank you all for the info.
To answer the questions:
- we have 2 DCs with 5 nodes in each; each node has 256G of memory,
24x1T drives, and 2x Xeon CPUs - there are multiple Cassandra instances
running for different projects. The nodes themselves are powerful enough.
- there are 2 keyspaces, one with 3
To clarify what Chris said, restarting OpsCenter will remove the
notification, but we also have a bug filed to make that behavior a little
better and allow dismissing that notification without a restart. Thanks for
reporting the issue!
-Nick
On Mon, Feb 9, 2015 at 9:00 AM, Chris Lohfink
I had considered using Spark for this but:
1. we tried to deploy Spark only to find out that it was missing a number
of key things we need.
2. our app needs to shut down to release threads and resources. Spark
doesn't have support for this, so all the workers would have stale threads
leaking
I am on Cassandra 1.2.19 and I am following the documentation for adding
nodes to an existing cluster
http://www.datastax.com/docs/1.1/cluster_management#adding-capacity-to-an-existing-cluster
.
We are choosing to double our cluster from six to twelve. I ran the token
generator. Based on what I
Yes, Cassandra's partitioner is based on a hash ring. Doubling the number of
nodes is the best cluster-extending policy I've ever seen, because it's
zero-overhead. Hash ring - you take the MD5 max (2^128-1), divide it by the
number of nodes (partitions), getting N points and then evenly distribute
them across
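To make the hash-ring mechanics concrete, here is a toy sketch following the post's description (it uses the full 2^128 MD5 space as the post does; Cassandra's actual RandomPartitioner uses a 2^127 range): hash the partition key with MD5 and route it to the first token at or after that position, wrapping around the ring.

```python
import hashlib
from bisect import bisect_left

RING = 2 ** 128  # full MD5 output space, as described in the post

def tokens_for(n):
    """n evenly spaced ring positions: divide the hash space by the
    node count."""
    return [i * RING // n for i in range(n)]

def owner(key, tokens):
    """Index of the node owning `key`: MD5 the key, then take the first
    token at or after that position, wrapping past the last token."""
    h = int.from_bytes(hashlib.md5(key.encode()).digest(), "big")
    return bisect_left(tokens, h) % len(tokens)
```

Because the tokens are evenly spaced, doubling the node count simply splits every range in half, which is why it is the zero-overhead way to extend the cluster.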
Tom, this question would have a better chance of being answered on the
Node.js driver mailing list:
https://groups.google.com/a/lists.datastax.com/forum/#!forum/nodejs-driver-user
On Mon, Feb 9, 2015 at 5:38 PM, tom zs68j...@gmail.com wrote:
Hi:
I setup one node cassandra server, and using
How about deleting the previously inserted rows? Then insert them again?
Best Regards!
Chao Yan
My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
2015-02-10 9:52 GMT+08:00 Alex
Just for the record, I was doing the exact same thing in an internal
application at the startup I used to work for. We had the need to write
custom code to process all rows of a column family in parallel. Normally we
would use Spark for the job, but in our case the logic was a little more
AFAIK, you were using RF 3 in a 3-node cluster, so all your nodes had all
your data.
When the number of nodes started to grow, this assumption stopped being true.
I think Cassandra will scale linearly from 9 nodes on, but comparing against
a situation where all your nodes hold all your data is not
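The point can be made with a one-liner: under SimpleStrategy with balanced tokens, each node holds roughly min(RF, N)/N of the data, so a 3-node RF-3 cluster (where every node holds 100%) is not a fair baseline for judging linear scaling.

```python
def data_fraction_per_node(replication_factor, nodes):
    """Approximate fraction of the dataset each node stores with
    SimpleStrategy and evenly balanced tokens: RF copies spread over N
    nodes, capped at 1.0 once N <= RF."""
    return min(replication_factor / nodes, 1.0)

# RF 3: 3 nodes -> 1.0 (every node has everything), 9 nodes -> ~0.33,
# and it keeps shrinking from there - per-node load only starts behaving
# linearly once N is comfortably above RF.
```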
Stop using OpsCenter?
:)
Sorry, couldn't resist...
--
Colin Clark
+1 612 859 6129
Skype colin.p.clark
On Feb 9, 2015, at 3:01 AM, Björn Hachmann bjoern.hachm...@metrigo.de wrote:
Good morning,
unfortunately my last rolling restart of our Cassandra cluster issued from
OpsCenter (5.0.2)
Use token-awareness so you don't have as much coordinator overhead.
ml
On Mon, Feb 9, 2015 at 5:32 AM, Marcelo Valle (BLOOMBERG/ LONDON)
mvallemil...@bloomberg.net wrote:
AFAIK, you were using RF 3 in a 3-node cluster, so all your nodes had
all your data.
When the number of nodes started
Restarting the OpsCenter service will get rid of it.
Chris
On Mon, Feb 9, 2015 at 3:01 AM, Björn Hachmann bjoern.hachm...@metrigo.de
wrote:
Good morning,
unfortunately my last rolling restart of our Cassandra cluster issued from
OpsCenter (5.0.2) failed. No big deal, but since then OpsCenter is
Depending on whether you have deletes/updates, if this is an ad-hoc thing,
you might want to just read the SSTables directly.
On Feb 9, 2015, at 12:56 PM, Kevin Burton bur...@spinn3r.com wrote:
I had considered using Spark for this but:
1. we tried to deploy Spark only to find out that