Hi David,
I'm not running Cassandra 2.0.2, but I regularly move data from a
Cassandra cluster with vnodes to another one.
I would do the same to back up cluster A.
In order to restore cluster B, I do the following steps:
1. Deploy 5 nodes as part of the cluster-B ring.
2. Create
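For reference, this kind of snapshot-based copy between clusters can be sketched roughly as follows (keyspace name, paths, and host names are hypothetical; the exact steps depend on your topology and data directory layout):

```shell
# On each cluster-A node: flush memtables and take a named snapshot.
nodetool flush my_keyspace
nodetool snapshot -t backup_for_clusterB my_keyspace

# Copy the snapshot SSTables to a staging area (assumes the standard
# data_file_directories layout; adjust paths for your install).
rsync -av /var/lib/cassandra/data/my_keyspace/*/snapshots/backup_for_clusterB/ \
      staging-host:/staging/my_keyspace/

# On cluster B: recreate the schema first, then stream the SSTables in.
# sstableloader handles token assignment, so it works across vnode clusters.
sstableloader -d clusterB-node1,clusterB-node2 /staging/my_keyspace/my_table/
```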
We have an application that has been designed to use potentially 100s of
keyspaces (one for each company).
One thing we are noticing is that the time for nodetool repair across all of
the keyspaces seems to increase linearly with the number of keyspaces. For
example, if we have a 6-node EC2 (m1.large)
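With hundreds of keyspaces, per-keyspace repair overhead adds up; one way to see where the time goes is to time each keyspace's repair separately (a sketch; the keyspace-discovery pipeline is an assumption and may need adjusting for your cqlsh version):

```shell
# Repair each keyspace individually and time it, to see which ones dominate.
for ks in $(echo "DESCRIBE KEYSPACES;" | cqlsh | tr -s ' \n' '\n'); do
  echo "repairing $ks"
  time nodetool repair "$ks"
done
```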
There was a bug introduced in 2.0.0-beta1 related to TTL; a patch just became
available at: https://issues.apache.org/jira/browse/CASSANDRA-6275
On Thu, Nov 7, 2013 at 5:15 AM, Murthy Chelankuri kmurt...@gmail.com wrote:
I have been experimenting with the latest Cassandra version for storing huge
Hello,
I ran into some problems during a version upgrade; my source is Cassandra 2.0
and my target is Cassandra 2.0.2.
My cluster has 3 nodes in the same datacenter, and I will upgrade them
one by one.
So I deploy the binaries of the new version, and configure my
cassandra.yaml with the
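For the archive, a rolling binary upgrade of a single node usually looks something like the following (a sketch, not an official procedure; paths and service names are assumptions):

```shell
# 1. Stop accepting writes on this node cleanly and flush to disk.
nodetool drain

# 2. Stop the old Cassandra process.
sudo service cassandra stop    # or kill the JVM for a tarball install

# 3. Deploy the 2.0.2 binaries, then carry your settings over.
#    Diff your old yaml against the new default rather than copying blindly,
#    since new releases add and rename options.
diff /opt/cassandra-2.0.0/conf/cassandra.yaml /opt/cassandra-2.0.2/conf/cassandra.yaml

# 4. Start the new version and watch the log for errors.
sudo service cassandra start
tail -f /var/log/cassandra/system.log

# 5. Rewrite SSTables in the new on-disk format if the release requires it.
nodetool upgradesstables
```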
but from what I understand, there is no support for this in CQL3?
Whether or not there is support probably depends on your definition of
support. It is possible to do in CQL3 if that is your question.
What can be said however is that CQL3 does not consider that every type
should
have an empty
Hi all,
I'm trying to set the conf directory path for Cassandra. According to [1], I can
set it using a system property: *cassandra.config=directory*
But it doesn't seem to work for me when I give the conf directory path; I get
the following exception.
*[2013-11-20 22:24:38,273] ERROR
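For what it's worth, `cassandra.config` expects a URL (or path) pointing at the cassandra.yaml file itself, not at the conf directory, which would explain the exception; something like this (the path is a placeholder for your install):

```shell
# Point cassandra.config at the yaml file, not the directory containing it,
# e.g. in cassandra-env.sh or on the JVM command line.
JVM_OPTS="$JVM_OPTS -Dcassandra.config=file:///opt/cassandra/conf/cassandra.yaml"
```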
On Wed, Nov 20, 2013 at 5:44 AM, Bonnet Jonathan
jonathan.bon...@externe.bnpparibas.com wrote:
So I deploy the binaries of the new version, and configure my
cassandra.yaml with the same information as before.
Why deploy binaries instead of a binary package?
=Rob
I've got a single node with all empty tables, and truncate fails with the
following error: Unable to complete request: one or more nodes were
unavailable.
Everything else seems fine. I can insert, update, delete, etc.
The only thing in the logs that looks relevant is this:
INFO
Thanks for the suggestions Aaron.
As a follow up, we ran a bunch of tests with different combinations of these
changes on a 2-node ring. The load was generated using cassandra-stress, run
with default values to write 30 million rows, and read them back.
However, for both writes and reads there
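The load described above can be reproduced with the bundled stress tool; roughly like this (pre-2.1 stress syntax from that era; node addresses are placeholders):

```shell
# Write 30 million rows with the default schema and column sizes...
tools/bin/cassandra-stress -d node1,node2 -n 30000000
# ...then read them back.
tools/bin/cassandra-stress -d node1,node2 -n 30000000 -o read
```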
Hi all,
Is there any open-source software for automated deployment of C* in production?
Best Regards,
Boole Guo
Software Engineer, NESC-SH.MIS
+86-021-51530666*41442
Floor 19, KaiKai Plaza, 888, Wanhangdu Rd, Shanghai (200042)
I had the same version upgrade path you had, but using the Debian binary
package. It looks like Java cannot find the main class. Try to find out by
running ps and grepping for the cassandra process; it should show a long
classpath. Check whether apache-cassandra-2.0.2.jar is in the
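One way to do that check (a sketch; the grep pattern assumes the standard daemon class name appears on the command line):

```shell
# Show the running Cassandra JVM's command line split into one token per
# line, then check whether the 2.0.2 jar is on the classpath.
ps -ef | grep '[C]assandraDaemon' | tr ' :' '\n' | grep 'apache-cassandra'
```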
Anyone?
On Tue, Nov 19, 2013 at 1:26 PM, Techy Teck comptechge...@gmail.com wrote:
Does OpsCenter support CF created using CQL? If yes, then is there any
specific version that we need to use for the OpsCenter?
Currently we have OpsCenter in production which doesn't show the tables
created
AFAIK you can't; you can only see the created tables.
On Nov 21, 2013 11:18 AM, Techy Teck comptechge...@gmail.com wrote:
Anyone?
java.lang.RuntimeException: java.lang.RuntimeException: Unable to search
across multiple secondary index types
A query that used two secondary indexed columns would require a query planner
to determine the most efficient approach. We don't support features like that.
I would expect an empty
The problem occurs during the day, when updates can be sent that possibly
contain older data than the nightly batch update.
If you have an application-level sequence for updates (I use that term to
avoid saying timestamp), you could use it as the Cassandra timestamp. As long as
you know
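The idea of supplying the application-level sequence as the write timestamp can be sketched in CQL via cqlsh (the table and columns are hypothetical; the sequence must be comparable across both update paths):

```shell
# Daytime update, tagged with application sequence number 1005.
cqlsh -e "UPDATE demo.accounts USING TIMESTAMP 1005 SET balance = 42 WHERE id = 1;"
# A late-arriving batch row carrying an older sequence (1001) loses:
# Cassandra keeps the value with the higher timestamp, so a read
# afterwards still returns balance = 42.
cqlsh -e "UPDATE demo.accounts USING TIMESTAMP 1001 SET balance = 17 WHERE id = 1;"
```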
The first particular test we tried
What was the disk_failure_policy setting?
1) There were NO errors in the log on the node where we removed the commit
log SSD drive - this surprised us (of course our ops monitoring would detect
the downed disk too, but we hope to be able to look for
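For context, the setting lives in cassandra.yaml; in 2.0-era versions the relevant values include the following (check your version's default yaml for the authoritative list):

```yaml
# cassandra.yaml
# ignore:      log the error and keep going (pre-1.2 behaviour)
# stop:        shut down gossip and Thrift, leaving the node effectively down
# best_effort: stop using the failed disk but keep serving from the rest
disk_failure_policy: stop
```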
- broadcast_address is set to the instance's public address
You only need this if you have a multi region setup.
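A typical multi-region EC2 arrangement in cassandra.yaml looks roughly like this (addresses are placeholders):

```yaml
# cassandra.yaml (multi-region EC2)
listen_address: 10.0.0.12       # private address for intra-region traffic
broadcast_address: 54.210.1.2   # public address advertised to other regions
# Usually paired with Ec2MultiRegionSnitch, which sets broadcast_address
# automatically from the instance's public IP.
```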
I’ve gisted the results here:
https://gist.github.com/skyebook/be5ee75a000a1e6d65d0
This error
TRACE [HANDSHAKE-/NODE_1_PUBLIC_IP] 2013-11-18 06:57:13,984
Dear All
The version of Cassandra is 1.2.3 in DSE 3.0, which is currently installed on my
machine. Now I want to upgrade Cassandra to the latest version, 2.0.
Do I have to upgrade DSE 3.0 to DSE 3.2 (the latest version available)? But
it still has Cassandra 1.2.11, which I don't want.
Or