Hello list,
I have a two-DC setup with replication factor DC1:3, DC2:3. DC1 has 6 nodes,
DC2 has 3. The whole setup runs on AWS, on Cassandra 1.1.
Here's my nodetool ring:
1.1.1.1 eu-west 1a Up Normal 55.07 GB 50.00% 0
2.2.2.1 us-east 1b Up
No. This is not going to work. The vnodes feature requires the murmur3
partitioner which was introduced with Cassandra 1.2.
Since you are currently using 1.1, you must be using the random
partitioner, which is not compatible with vnodes.
Because the partitioner determines the physical layout
Hi,
I don't know how your application works, but at the last Cassandra Summit
Europe I explained how we did the migration from a relational database to
Cassandra without any interruption of service.
You can have a look at the video: http://www.youtube.com/watch?v=mefOE9K7sLI
And use the
Hi Guys,
I have recently started learning Cassandra and am working with it.
I have created two column families. For CF1, a write is an insert into a
unique row with all column values, e.g.:
Key Col1 Col2 Col3
k1 c11 c12 c13
k2 c21 c22 c23
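The CF1 write pattern above could be sketched as a toy in-memory model (names and values are purely illustrative, not Cassandra API calls):

```python
# Toy model of CF1: every write inserts one unique row keyed by `key`,
# carrying all three column values at once.
cf1 = {}

def write_cf1(key, c1, c2, c3):
    cf1[key] = {"Col1": c1, "Col2": c2, "Col3": c3}

write_cf1("k1", "c11", "c12", "c13")
write_cf1("k2", "c21", "c22", "c23")
```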
For CF2, a write is
What is the technical limitation that makes vnodes need Murmur3? That seems
uncool for long-time users.
On Monday, December 30, 2013, Jean-Armel Luce jaluc...@gmail.com wrote:
Hi,
I don't know how your application works, but I explained during the last
Cassandra Summit Europe how we did the migration
Hi,
Random Partitioner + VNodes are a supported combo based on DataStax
documentation:
http://www.datastax.com/documentation/cassandra/1.2/webhelp/cassandra/architecture/architecturePartitionerAbout_c.html
How else would you even migrate from 1.1 to Vnodes since migration from one
partitioner to
Sorry for the misinformation. Totally forgot about that being supported
since I've never seen the combination actually used. Correct that it
should work, though.
On Dec 30, 2013 2:18 PM, Hannu Kröger hkro...@gmail.com wrote:
Hi,
Random Partitioner + VNodes are a supported combo based on
OK. Given the correction of my unfortunate partitioner error, you can, and
probably should, upgrade in place to 1.2, but with num_tokens=1 so it will
initially behave like 1.1 without vnodes. Then you can do a rolling
conversion to more than one vnode per node, and once complete, shuffle your
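A minimal sketch of the relevant cassandra.yaml setting for that upgrade path (the vnode count of 256 is illustrative, not a recommendation from this thread):

```yaml
# Step 1: upgrade in place to 1.2 but keep single-token behavior, as in 1.1
num_tokens: 1
# Step 2 (rolling, one node at a time): enable vnodes, then rebalance
# num_tokens: 256
```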
Are there published best practices for managing schema with CQL 3.0?
Say, for bootstrapping the schema for a new feature?
Do folks query system.schema_keyspaces on startup and create the necessary
schema if it doesn't exist?
Or do you have one-off scripts that create the schema?
Is there a
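The "query system.schema_keyspaces on startup" approach could look roughly like this. This is a hedged sketch: `session` is assumed to be any object exposing an `execute(cql)` method (as the DataStax driver's Session does), and the keyspace name and replication options are illustrative only.

```python
# Sketch: bootstrap the keyspace on startup only if it is missing.
def ensure_keyspace(session, name):
    rows = session.execute(
        "SELECT keyspace_name FROM system.schema_keyspaces")
    if any(row[0] == name for row in rows):
        return False  # schema already bootstrapped
    session.execute(
        "CREATE KEYSPACE %s WITH replication = "
        "{'class': 'SimpleStrategy', 'replication_factor': 3}" % name)
    return True
```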
On Mon, Dec 30, 2013 at 6:45 AM, Tupshin Harper tups...@tupshin.com wrote:
OK. Given the correction of my unfortunate partitioner error, you can,
and probably should, upgrade in place to 1.2, but with num_tokens=1 so it
will initially behave like 1.1 non vnodes would. Then you can do a
On Fri, Dec 27, 2013 at 6:13 PM, Josh Dzielak j...@keen.io wrote:
Our suspicion is that we somehow have a row-level tombstone that
is future-dated and has not gone away (we’ve lowered gc_grace_seconds in
hope that it’d get compacted, but no luck so far, even though the sstables
that hold the
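The purge rule the poster is fighting can be sketched as follows. This mirrors my understanding of the rule, not Cassandra's actual code: a tombstone is only droppable at compaction once gc_grace has elapsed since its (seconds-resolution) local deletion time, so a future-dated deletion time keeps it alive no matter how low gc_grace_seconds is set.

```python
import time

def tombstone_purgeable(local_deletion_time, gc_grace_seconds, now=None):
    # Droppable only once gc_grace has elapsed since the deletion timestamp.
    # A deletion time stamped in the future can never satisfy this, even
    # with gc_grace_seconds lowered to 0.
    now = time.time() if now is None else now
    return local_deletion_time + gc_grace_seconds <= now
```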
The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.0.4.
Cassandra is a highly scalable second-generation distributed database.
You can read more here:
http://cassandra.apache.org/
Downloads of source and binary distributions are listed in our download
You can add something like this to cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -Dorg.xerial.snappy.tempdir=/path/that/allows/executables"
- Erik -
On 12/28/2013 08:36 AM, Edward Capriolo wrote:
Check your fstab settings. On some systems /tmp has noexec set, and
unpacking a library into /tmp and
I want to determine data replication latency between data centers. Are there
any metrics available to capture it in JConsole or some other way?
On Tue, Dec 17, 2013 at 1:46 PM, Joel Segerlind j...@kogito.se wrote:
Thanks for the info. However, wouldn't this also affect nodetool -pr (although
not as much), which I ran on the same node the other day in about 35 min?
I cannot understand how it can take 35 min for the primary range, and
On Mon, Nov 18, 2013 at 10:28 AM, Carlos Alvarez cbalva...@gmail.com wrote:
Here
http://www.datastax.com/documentation/cassandra/1.2/webhelp/cassandra/operations/ops_add_node_to_cluster_t.html
says that you need to wait 2 minutes between adding nodes.
I was trying to figure out why, and
On Wed, Dec 25, 2013 at 10:01 AM, Edward Capriolo edlinuxg...@gmail.com wrote:
I have to hijack this thread. There seem to be many problems with the
2.0.3 release.
+1. There is no 2.0.x release I consider production ready, even after
today's 2.0.4.
Outside of passing all unit tests, factors
I ended up changing memtable_flush_queue_size to be large enough to contain
the biggest flood I saw.
As part of the flush process, the “Switch Lock” is taken to synchronise around
the commit log. This is a reentrant read-write lock; the flush path takes the
write lock and the write path takes the
You will need to paginate the list of keys to read in your app.
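The pagination advice above could be sketched like this (a generic chunking helper, not a driver API):

```python
def paginate(keys, page_size):
    # Read keys in fixed-size pages so each multiget call stays small,
    # instead of requesting the entire key list in one request.
    for i in range(0, len(keys), page_size):
        yield keys[i:i + page_size]

pages = list(paginate(["k%d" % n for n in range(10)], 4))
```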
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 21/12/2013, at 12:58 pm, Parag Patel parag.pa...@fusionts.com wrote:
One confusing question: is it a server-side issue or a client-side one?
Check the server log for errors to make sure it’s not a server-side issue.
Also check whether something in the network could be killing long-lived
connections.
Check that the thrift lib the client is using is the same
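One common guard against middleboxes reaping idle long-lived connections is enabling TCP keepalive on the client socket before the transport uses it. A minimal sketch (interval tuning via TCP_KEEPIDLE etc. is OS-specific and omitted):

```python
import socket

# Enable OS-level keepalive probes so an idle connection is not silently
# dropped by a firewall or NAT between client and server.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
```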
So now I will try to patch my Cassandra 1.2.11 installation, but I just wanted
to ask you guys first if there is any other solution that does not involve a
release.
The patch in CASSANDRA-6311 is for 2.0; you cannot apply it to 1.2.
But when I am using the Java driver, the driver already
I wrote a small (yet untested) utility which should be able to read SSTable
files from disk and write them into a Cassandra cluster using Hector.
Consider using the SSTableSimpleUnsortedWriter (see
http://www.datastax.com/dev/blog/bulk-loading) to create the SSTables, which
you can then bulk load.
JMX is doing its thing on the Cassandra node and is running on port 8081.
Have you set the JMX port for the cluster in OpsCenter? The default JMX port
has been 7199 for a while.
Off the top of my head, it’s in the same area where you specify the initial
nodes in the cluster, maybe behind
Check that the SSTable is actually in use by Cassandra; if it’s missing a
component or is otherwise corrupt, it will not be opened at run time and so
not included in all the fun games the other SSTables get to play.
If you have the last startup in the logs, check for an “Opening…” message or
an ERROR.
Hi Aaron,
You were right. JMX is running on port 7199; it's just the web interface
that's on 8081. My mistake. But what I did was delete my existing cluster,
try to build a new cluster within OpsCenter, and point it at my existing
Cassandra node. Just one node for now, but when we
I see the SSTable in this log statement: "Stream context metadata" (along
with a bunch of other files), but I do not see it in the list of files being
opened ("Opening", which I see quite a bit of, as expected).
Safe to try moving that file off the server (to a backup location)? If I
tried this, would I want to
Hi guys,
I am using YCSB with the thrift-based *client.batch_mutate()* call.
Now say OpsCenter reports the write requests as, say, 1000 *operations*/sec
when the record count is, say, 1 records.
The OpsCenter API docs describe 'Write Requests' as requests per second.
1) What does an 'operation or request'