Hi,
I have reloaded the data in my cluster of 3 nodes (RF: 2).
I have loaded about 2 billion rows in one table.
I use LeveledCompactionStrategy on my table.
I use version 2.1.6.
I use the default cassandra.yaml; only the IP address for the seeds and the throughput
have been changed.
I loaded my data with
This only applies to “select *” queries where you don’t specify the column
names.
There is a reported bug that was fixed in 2.1.3. See
https://issues.apache.org/jira/browse/CASSANDRA-7910
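For anyone still on an affected version, naming the columns explicitly in the prepared
statement avoids the "select *" code path the ticket describes. A minimal sketch with the
DataStax Java driver (contact point, keyspace, table, and column names are made-up
placeholders, not from this thread):

    import com.datastax.driver.core.*;

    public class ExplicitColumns {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");

            // Name the columns instead of "SELECT *" so a later schema change
            // (e.g. an added column) cannot shift the result-set metadata.
            PreparedStatement ps =
                    session.prepare("SELECT id, name, created_at FROM users WHERE id = ?");
            Row row = session.execute(ps.bind(42)).one();
            if (row != null) {
                System.out.println(row.getString("name"));
            }
            cluster.close();
        }
    }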
From: joseph gao [mailto:gaojf.bok...@gmail.com]
Sent: Monday, June 15, 2015 10:52 AM
To:
Hi,
I have a cluster of 3 nodes (RF: 2).
There are about 2 billion rows in one table.
I use LeveledCompactionStrategy on my table.
I use version 2.1.6.
I use the default cassandra.yaml; only the IP address for the seeds and the throughput
have been changed.
I have tested a scenario where one node
Hi Jean,
The problem behind that warning is that you are reading too many tombstones per
request.
If you have tombstones without ever issuing a DELETE, it is probably because you
TTL'ed the data when inserting (by mistake? Or did you set
default_time_to_live on your table?). You can use nodetool cfstats to
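For reference, a TTL can sneak in either per statement or as a table default, so
tombstones appear even though no DELETE is ever issued. A minimal sketch of both with the
Java driver (table, column, and values are hypothetical):

    import com.datastax.driver.core.*;

    public class TtlExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");

            // Per-statement TTL: this row silently turns into tombstones after 1 day.
            session.execute("INSERT INTO events (id, payload) VALUES (1, 'x') USING TTL 86400");

            // Table-level default: every insert into this table is TTL'ed, even
            // though no individual statement ever mentions a TTL or a DELETE.
            session.execute("ALTER TABLE events WITH default_time_to_live = 86400");

            cluster.close();
        }
    }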
That is really wonderful. Thank you very much Alain. You gave me a lot of
leads to investigate. Thanks again for your help.
On 15 Jun 2015, at 17:49 , Alain RODRIGUEZ
arodr...@gmail.com wrote:
Hi, it looks like you're starting to use Cassandra.
Welcome.
I invite you to read from here as much as you can:
http://docs.datastax.com/en/cassandra/2.1/cassandra/gettingStartedCassandraIntro.html
When a node loses some data, you have various anti-entropy mechanisms:
Hinted Handoff -- for writes
Hi Andres,
This looks awesome, many thanks for your work on this. Just out of
curiosity, how does this compare to the DSE Cassandra with embedded Solr?
Do they provide very similar functionality? Is there a list of obvious pros
and cons of one versus the other?
Thanks!
Matthew
From:
maybe check the system.log to see if there is any exception and/or error?
check as well whether they have a consistent schema for the keyspace?
hth
jason
On Tue, Jun 16, 2015 at 7:17 AM, Michael Theroux mthero...@yahoo.com
wrote:
Hello,
We (finally) have just upgraded from Cassandra 1.1 to
You can get tombstones from inserting null values. Not sure if that’s the
problem, but it is another way of getting tombstones in your data.
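With the 2.1 Java driver (where leaving a bound variable unset is not supported before
protocol v4), one way to avoid writing null tombstones is to build the INSERT with only
the columns that actually have values, e.g. via QueryBuilder. A hedged sketch; table and
column names are invented:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.querybuilder.Insert;
    import com.datastax.driver.core.querybuilder.QueryBuilder;

    public class NoNullTombstones {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");

            String email = null; // value that may or may not be present

            // Only add the columns that actually have values: a bound null is
            // written as a tombstone, but an absent column writes nothing at all.
            Insert insert = QueryBuilder.insertInto("users")
                    .value("id", 42)
                    .value("name", "jean");
            if (email != null) {
                insert = insert.value("email", email);
            }
            session.execute(insert);
            cluster.close();
        }
    }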
On Jun 15, 2015, at 10:50 AM, Jean Tremblay
jean.tremb...@zen-innovations.com
wrote:
Dear all,
I identified a
On Sat, Jun 13, 2015 at 4:39 AM, Oleksandr Petrov
oleksandr.pet...@gmail.com wrote:
We're using Cassandra, recently migrated to 2.1.6, and we're experiencing
constant OOMs in one of our clusters.
Maybe this memory leak?
https://issues.apache.org/jira/browse/CASSANDRA-9549
=Rob
Alain, great write-up on the recovery procedure. You covered both the replication factor
and consistency levels. As mentioned, the two anti-entropy mechanisms, hinted
handoffs and read repair, work for temporary node outages and incremental recovery.
In case of disaster/catastrophic recovery, nodetool repair
Thanks Robert. I don't insert NULL values, but thanks anyway.
On 15 Jun 2015, at 19:16 , Robert Wille
rwi...@fold3.com wrote:
You can get tombstones from inserting null values. Not sure if that’s the
problem, but it is another way of getting tombstones in your data.
Dear all,
I identified a bit more closely the root cause of my missing data.
The problem is occurring when I use
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>2.1.6</version>
</dependency>
on my client against Cassandra 2.1.6.
I did not have
There's your problem: you're using the DataStax Java driver. :) I just ran
into this issue in the last week and it was incredibly frustrating. If you
are doing a simple loop over a "select *" query, then the DataStax Java
driver will only process 2^31 rows (i.e., Java's Integer.MAX_VALUE,
2,147,483,647).
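If a scan really does need to cover more rows than that, one common workaround (not a
driver feature, just a pattern) is to split the scan into token-range slices so that no
single result-set iteration runs unbounded. A rough sketch, assuming Murmur3Partitioner
and an invented table:

    import com.datastax.driver.core.*;

    public class TokenRangeScan {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");

            // Walk the Murmur3 token space in fixed-width slices; each slice is a
            // separate query, so no single iteration has to count past 2^31 rows.
            long sliceWidth = Long.MAX_VALUE / 1024;  // roughly 2048 slices over the ring
            long start = Long.MIN_VALUE;
            long rows = 0;
            while (true) {
                long end = (start > Long.MAX_VALUE - sliceWidth)
                        ? Long.MAX_VALUE : start + sliceWidth;
                ResultSet rs = session.execute(
                        "SELECT id FROM big_table WHERE token(id) >= ? AND token(id) <= ?",
                        start, end);
                for (Row row : rs) {
                    rows++;  // process row here
                }
                if (end == Long.MAX_VALUE) break;
                start = end + 1;  // slices are inclusive, so step past the last token
            }
            System.out.println("scanned " + rows + " rows");
            cluster.close();
        }
    }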
Thanks Bryan.
I believe I have a different problem with the Datastax 2.1.6 driver.
My problem is not that I make huge selects.
My problem seems more to occur on some inserts. I insert MANY rows, and with
version 2.1.6 of the driver I seem to be losing some records.
But thanks anyway, I will
Currently on 2.1.6 I'm seeing behavior like the following:

cqlsh:walker> select * from counter_table where field = 'test';

 field | value
-------+-------
  test |    30

(1 rows)

cqlsh:walker> select * from counter_table where field = 'test';

 field | value
-------+-------
  test |    90

(1 rows)
Hello,
We (finally) have just upgraded from Cassandra 1.1 to Cassandra 1.2.19.
Everything appears to be up and running normally, however, we have noticed
unusual output from nodetool ring. There is a new (to us) field, "Replicas", in
the nodetool output, and this field, seemingly at random, is
On Mon, Jun 15, 2015 at 2:52 PM, Dan Kinder dkin...@turnitin.com wrote:
Potentially relevant facts:
- Recently upgraded to 2.1.6 from 2.0.14
- This table has about a million rows, low contention, and a fairly high
increment rate
Can you repro on a counter that was created after the upgrade?
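For reference, a repro along those lines could look like this hedged sketch: create a
counter table fresh on 2.1.6, bump it once, and read it back repeatedly (keyspace, table,
and values here are invented to mirror the output above):

    import com.datastax.driver.core.*;

    public class CounterRepro {
        public static void main(String[] args) throws InterruptedException {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("walker");

            // A counter table created *after* the upgrade, so it holds no pre-2.1 cells.
            session.execute("CREATE TABLE IF NOT EXISTS counter_table_new "
                    + "(field text PRIMARY KEY, value counter)");
            session.execute(
                    "UPDATE counter_table_new SET value = value + 30 WHERE field = 'test'");

            // Read it back a few times; the value should stay stable at 30.
            for (int i = 0; i < 5; i++) {
                Row row = session.execute(
                        "SELECT value FROM counter_table_new WHERE field = 'test'").one();
                System.out.println("read " + i + ": " + row.getLong("value"));
                Thread.sleep(1000);
            }
            cluster.close();
        }
    }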
Mainly
hi, all
I'm using PreparedStatement. If I prepare a statement every time I use it,
Cassandra gives me a warning telling me NOT to re-prepare every time. So I cache
the PreparedStatement locally. But when another client changes the table's
schema, like adding a new column, if I still use the formerly cached
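For context, the local cache being described probably looks roughly like this hedged
sketch (a ConcurrentHashMap keyed by the CQL string; an illustration, not the poster's
actual code). The question is then what happens to cached entries after another client
alters the table:

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class StatementCache {
        private final Session session;
        private final ConcurrentMap<String, PreparedStatement> cache =
                new ConcurrentHashMap<String, PreparedStatement>();

        public StatementCache(Session session) {
            this.session = session;
        }

        // Prepare each distinct CQL string once and reuse it, which is what the
        // "re-preparing the same query" warning asks for.
        public PreparedStatement get(String cql) {
            PreparedStatement ps = cache.get(cql);
            if (ps == null) {
                ps = session.prepare(cql);
                PreparedStatement prev = cache.putIfAbsent(cql, ps);
                if (prev != null) {
                    ps = prev;  // another thread won the race; reuse its statement
                }
            }
            return ps;
        }
    }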