How big is each of the tables - are they all fairly small or fairly large?
Small as in no more than thousands of rows or large as in tens of millions
or hundreds of millions of rows?
Small tables are not ideal for a Cassandra cluster, since the few rows would
be spread out thinly across the nodes, even
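To make the distribution point concrete, here is a minimal sketch with the
python driver showing which replicas own a few partition keys; the contact
point, keyspace name, and keys are assumptions, not from this thread:

    # Hedged sketch: even a tiny table's rows scatter across the ring,
    # because each partition key hashes to its own token.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])   # assumed contact point
    cluster.connect()                  # populates cluster.metadata

    for key in (b"alice", b"bob", b"carol"):                    # hypothetical keys
        replicas = cluster.metadata.get_replicas("demo", key)   # assumed keyspace
        print(key, [h.address for h in replicas])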
On Wed, May 27, 2015 at 5:10 PM, Jason Unovitch jason.unovi...@gmail.com
wrote:
Simple and quick question, can anyone point me to where the Cassandra
1.2.x series EOL date was announced? I see archived mailing list
threads for 1.2.19 mentioning it was going to be the last release and
I see
On Thu, May 28, 2015 at 2:00 AM, Thomas Whiteway
thomas.white...@metaswitch.com wrote:
Sorry, I should have been clearer. In this case we’ve decommissioned
the node and deleted the data, commitlog, and saved caches directories so
we’re not hitting CASSANDRA-8801. We also hit the “A node with address
<address> already exists, cancelling join” error
Mohammed,
This doesn’t really answer your question, but I’m working on a new REST
server that allows people to submit SQL queries over REST, which get
executed via Spark SQL. Based on what I started here:
http://brianoneill.blogspot.com/2015/05/spark-sql-against-cassandra-example.html
I
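As a rough illustration of that design, a minimal sketch assuming pyspark
plus the spark-cassandra-connector; the endpoint, keyspace, and table names
are hypothetical, not Brian's actual code:

    # Hedged sketch: accept SQL over REST and run it through Spark SQL
    # against a Cassandra-backed temp view. Not the author's actual server.
    from flask import Flask, request, jsonify
    from pyspark.sql import SparkSession

    app = Flask(__name__)
    spark = (SparkSession.builder
             .appName("sql-over-rest")
             .config("spark.cassandra.connection.host", "127.0.0.1")
             .getOrCreate())

    # Register a Cassandra table as a temp view so it can be queried by name.
    (spark.read.format("org.apache.spark.sql.cassandra")
          .options(keyspace="demo", table="events")   # hypothetical names
          .load()
          .createOrReplaceTempView("events"))

    @app.route("/sql", methods=["POST"])
    def run_sql():
        query = request.get_json()["query"]
        rows = spark.sql(query).limit(100).collect()  # cap the result size
        return jsonify([row.asDict() for row in rows])

    if __name__ == "__main__":
        app.run(port=8080)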
I have a 25-node C* cluster running C* 2.1.3. These days one node has
experienced split brain many times.
Checking the log, I found this:
INFO [MemtableFlushWriter:118] 2015-05-29 08:07:39,176
Memtable.java:378 - Completed flushing
Anybody out there using DSE + Spark SQL JDBC server?
Mohammed
From: Mohammed Guller [mailto:moham...@glassbeam.com]
Sent: Tuesday, May 26, 2015 6:17 PM
To: user@cassandra.apache.org
Subject: Spark SQL JDBC Server + DSE
Hi -
As I understand, the Spark SQL Thrift/JDBC server cannot be used with
Hello Jack,
Column families? As opposed to tables? Are you using Thrift instead of
CQL3? You should be focusing on the latter, not the former.
We have an ORM developed in our company, which maps each DTO to a column
family. So, we have many column families. We are using CQL3.
But either way,
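For reference, a minimal sketch of the one-table-per-DTO mapping using the
python driver's bundled cqlengine ORM; the keyspace, model, and columns here
are hypothetical:

    # Hedged sketch: map a DTO to its own CQL3 table with cqlengine.
    from cassandra.cqlengine import columns, connection
    from cassandra.cqlengine.management import sync_table
    from cassandra.cqlengine.models import Model

    class UserDTO(Model):                 # hypothetical DTO
        __keyspace__ = "demo"             # assumed keyspace
        user_id = columns.UUID(primary_key=True)
        name = columns.Text()
        created = columns.DateTime()

    connection.setup(["127.0.0.1"], "demo", protocol_version=3)
    sync_table(UserDTO)   # creates the backing table via CQL3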
hmm.. I suppose you started with rf = 1 and then, when the 3 nodes arrived,
just added them into the cluster and later decommissioned this one node?
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_remove_node_t.html
hth
jason
On Tue, May 26, 2015 at 10:02 PM, Matthew Johnson
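For the record, the rf = 1 to rf = 3 move sketched above boils down to
something like this (keyspace name assumed; the repair and decommission
steps run via nodetool outside the script):

    # Hedged sketch: raise the replication factor, then repair before
    # decommissioning the old node. Keyspace name is hypothetical.
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect()
    session.execute(
        "ALTER KEYSPACE demo WITH replication = "
        "{'class': 'SimpleStrategy', 'replication_factor': 3}"
    )
    # Then, on each node: `nodetool repair` to stream the new replicas,
    # and finally `nodetool decommission` on the node being removed.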
Sorry, I should have been clearer. In this case we’ve decommissioned the node
and deleted the data, commitlog, and saved caches directories so we’re not
hitting CASSANDRA-8801. We also hit the “A node with address <address>
already exists, cancelling join” error when performing the same steps
Depending on your use case and data types (for example, if you can have a
minimally nested JSON representation of the objects), you could go with a
common map<string,string> representation where keys are top-level object
fields and values are valid JSON literals as strings; e.g. unquoted
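A minimal sketch of that idea with the python driver; the table, keyspace,
and sample object are all hypothetical:

    # Hedged sketch: top-level fields become map keys; nested values are
    # stored as JSON literal strings in a map<text,text> column.
    import json
    from cassandra.cluster import Cluster

    doc = {"id": "42", "tags": ["a", "b"], "owner": {"name": "alice"}}
    flat = {k: json.dumps(v) for k, v in doc.items()}  # values as JSON literals

    session = Cluster(["127.0.0.1"]).connect("demo")   # assumed keyspace
    session.execute(
        "INSERT INTO objects (id, fields) VALUES (%s, %s)",  # hypothetical table
        (doc["id"], flat),
    )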
Hi, I'm running Cassandra 2.1.5 (single datacenter, 4 nodes, 16GB VPS each
node); I have given my configuration below. I'm using the python driver on
my clients; when I tried to insert 1049067 items I got an error:
cassandra.WriteTimeout: code=1100 [Coordinator node timed out waiting for
replica
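Not from the thread, but one common way to keep a bulk load from overrunning
the coordinator is to bound the number of in-flight requests; a sketch with
the python driver (table and column names are hypothetical):

    # Hedged sketch: bounded-concurrency bulk insert with the python driver.
    from cassandra.cluster import Cluster
    from cassandra.concurrent import execute_concurrent_with_args

    session = Cluster(["127.0.0.1"]).connect("demo")   # assumed keyspace
    insert = session.prepare("INSERT INTO items (id, payload) VALUES (?, ?)")

    rows = ((i, "payload-%d" % i) for i in range(1049067))
    # concurrency=100 caps in-flight writes instead of firing them all at once.
    for ok, result in execute_concurrent_with_args(session, insert, rows,
                                                   concurrency=100):
        if not ok:
            print("write failed:", result)   # result is the exception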
I have experienced similar results: OperationTimedOut after inserting many
millions of records on a 5-node cluster, using Cassandra 2.1.5.
I rolled back to 2.1.4 using exactly the same configuration as with 2.1.5,
and these timeouts went away… This is not the solution to your problem, but just
While Graham's suggestion will let you collapse a bunch of tables into a
single one, it'll likely result in so many other problems it won't be worth
the effort. I strongly advise against this approach.
First off, different workloads need different tuning. Compaction
strategies,
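To illustrate the per-table tuning point (my example, not the poster's):
each table can carry its own compaction strategy, which you give up the
moment everything shares one table. Table names below are hypothetical:

    # Hedged sketch: per-table compaction tuning via plain CQL.
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("demo")   # assumed keyspace
    # Write-heavy, rarely-updated table: size-tiered compaction.
    session.execute("ALTER TABLE events WITH compaction = "
                    "{'class': 'SizeTieredCompactionStrategy'}")
    # Read-heavy table with overwrites: leveled compaction.
    session.execute("ALTER TABLE users WITH compaction = "
                    "{'class': 'LeveledCompactionStrategy'}")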