Hi,
If the nodes of a Cassandra ring are in different timezones, could that
affect counter columns, since they depend on timestamps?
Thanks
Ajay
A compound index in MongoDB is really useful for queries that involve
filtering/sorting on multiple columns. I was wondering whether Cassandra 3.0
is supposed to implement this feature.
When I read through JIRA, I only found features like CASSANDRA-6048
Hi,
I am considering tuning the tombstone warn/error threshold.
Just making sure: if I INSERT one (CQL) row populating all six columns and
then DELETE the inserted row, will Cassandra write one range tombstone or
seven tombstones (one per column plus the row marker)?
Thanks,
Jens
If you issue DELETE FROM my_table WHERE partition_key = xxx, Cassandra will
create a row tombstone and not one tombstone per column, fortunately
On Fri, Dec 26, 2014 at 10:50 AM, Jens Rantil jens.ran...@tink.se wrote:
Many JIRAs related to indexes are open for 3.x:
Global indices: https://issues.apache.org/jira/browse/CASSANDRA-6477
Functional index: https://issues.apache.org/jira/browse/CASSANDRA-7458
Partial index: https://issues.apache.org/jira/browse/CASSANDRA-7391
On Fri, Dec 26, 2014 at 10:49 AM, ziju
The global index JIRA actually mentions compound indexes, but it seems that
no JIRA has been created for this feature? Anyway, I think I should wait
for 3.0 and see what it brings to indexing. Thanks.
On Fri, Dec 26, 2014 at 6:09 PM, DuyHai Doan doanduy...@gmail.com wrote:
Great. Also, if I issue DELETE FROM my_table WHERE partition_key = xxx AND
compound_key = yyy, I understand only a single tombstone will be created?
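The two delete shapes under discussion might look like this in CQL (the table and column names here are hypothetical, not from the thread's actual schema):

```sql
-- Deleting by partition key alone writes a single partition-level tombstone
-- covering everything in the partition:
DELETE FROM my_table WHERE partition_key = 'xxx';

-- Deleting one CQL row (full primary key specified) writes a single
-- row-level range tombstone covering that row's cells, not one tombstone
-- per column:
DELETE FROM my_table WHERE partition_key = 'xxx' AND compound_key = 'yyy';
```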
On Fri, Dec 26, 2014 at 10:59 AM, DuyHai Doan doanduy...@gmail.com wrote:
Hi, all: In my cf, each row has two columns: one column is a
timestamp (64-bit), the other is data of about 500 KB.
When I read the whole row, the QPS is about 30. When I read only the data
column, the QPS is about 500.
Why is the read so much slower when such a small extra column is added to the read?
Thanks.
What do your CQL queries look like?
-- Jack Krupansky
On Fri, Dec 26, 2014 at 8:00 AM, yhq...@sina.com wrote:
Timestamps are timezone independent. This is a property of timestamps, not
a property of Cassandra. A given moment is the same timestamp everywhere in
the world. To display it in a human-readable form, you then need to know
which timezone you're attempting to represent the timestamp in; that is
purely a display concern.
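The timezone-independence point can be seen with plain Python timestamps (a minimal sketch; the epoch value is chosen arbitrarily):

```python
import datetime

# One instant in time: the same epoch timestamp everywhere in the world.
ts = 1419588000  # seconds since the Unix epoch (arbitrary example value)

# Render that instant in two different timezones; the underlying value
# does not change, only its human-readable representation does.
utc = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
est = utc.astimezone(datetime.timezone(datetime.timedelta(hours=-5)))

print(utc.isoformat())  # 2014-12-26T10:00:00+00:00
print(est.isoformat())  # 2014-12-26T05:00:00-05:00

# Both representations refer to the same instant:
assert utc == est
assert utc.timestamp() == est.timestamp() == ts
```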
I would suggest enabling tracing in cqlsh and seeing what it has to say.
There are many things that could cause this, but I'm thinking in
particular you may have a lot of tombstones which get read when you scan
the whole row, and are skipped when you read just one column.
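Tracing can be toggled per session in cqlsh; a sketch (the table and key names are hypothetical):

```sql
TRACING ON;

-- Run the slow query; cqlsh prints a step-by-step trace afterwards,
-- including how many live and tombstone cells were read.
SELECT * FROM my_cf WHERE key = 'some_key';

TRACING OFF;
```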
On Fri, Dec 26, 2014 at
Hello, I am new and did not seem to find the answer after a brief search.
Please help.
Thanks!
J
Take a look at sstableloader. We use it to load 30M+ rows into Cassandra.
The Datastax documentation is a good start
--
Keith Sterling
Head of Software
E: keith.sterl...@first-utility.com
P: +44 7771 597 630
W: first-utility.com
A: Opus 40 Business Park,
Haywood Road, Warwick
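An sstableloader invocation is roughly of this shape (the contact points and path are hypothetical; the directory being streamed must end in keyspace/table):

```shell
# Stream pre-built SSTables from the given directory into the cluster,
# discovering the ring via the -d contact points.
sstableloader -d 10.0.0.1,10.0.0.2 /var/lib/cassandra/data/my_keyspace/my_table
```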
I use the Thrift interface to query the data.
Thank you. I did not express my question clearly.
I wonder if there is sample code to load any website data into Cassandra?
Say, this webpage http://datatomix.com/?p=84 seems to use Python and tweepy
to call the Twitter API, get data in JSON format, and then load that data
into Cassandra.
So it seems
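The JSON-shaping part of such a pipeline can be sketched in plain Python (the record fields, table, and column names below are hypothetical; actually executing the INSERT would additionally require a driver such as the DataStax Python driver):

```python
import json

# Hypothetical tweet-like record, as it might come back from an API.
tweet_json = ('{"id": 123, "user": "alice", "text": "hello",'
              ' "created_at": "2014-12-26T10:00:00Z"}')

tweet = json.loads(tweet_json)

# Shape the parsed JSON into a CQL statement plus bind parameters.
# A driver session would execute these; here we only build them.
insert_cql = ("INSERT INTO tweets (id, user, text, created_at) "
              "VALUES (%s, %s, %s, %s)")
params = (tweet["id"], tweet["user"], tweet["text"], tweet["created_at"])

print(params)
```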