Thanks, something was wrong with the client.
----- Original Message -----
From: Jeff Jirsa jeff.ji...@crowdstrike.com
To: user@cassandra.apache.org, yhq...@sina.com
Subject: Re: What's the format of Cassandra's timestamp, microsecond or millisecond?
Date: 2015-08-11 00:00
Hi, all: When I use the cassandra.thrift API to manipulate data, the timestamp is in milliseconds. This can be verified in the cli:
[default@ks_wwapp] get cf_user['100031'];
=> (super_column=01#uid#,
     (name=0#d#, value=bf860100, timestamp=1438689394196))
When I use the cli to set a column,
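For reference, the unit difference is visible from the digit count: the value 1438689394196 shown above has 13 digits, i.e. milliseconds since the epoch, while the microsecond convention used by CQL and most drivers gives 16 digits. A tiny Java illustration (not from the original mail):

public class TimestampUnits {
    public static void main(String[] args) {
        long millis = System.currentTimeMillis();   // 13 digits, e.g. 1438689394196
        long micros = millis * 1000L;               // 16 digits, microseconds since the epoch
        System.out.println("millis=" + millis + "  micros=" + micros);
    }
}

Cassandra itself only compares the numbers when reconciling writes, so the essential thing is that every writer of a column family uses the same unit.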
Hi, all: I use cassandra.thrift to implement a replace-row interface in this way: first use batch_mutate to delete the row, then use batch_mutate to insert a new row. I always find that after calling this interface, the row does not exist.
Then I suspect it is a problem caused by
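For concreteness, here is a minimal sketch (not the original poster's code) of such a replace-row built on the Thrift batch_mutate call, assuming a standard non-super column family named "cf_user" and client-generated microsecond timestamps. The important detail is that the insert timestamp is strictly greater than the deletion timestamp; if the two end up equal (easy to do with millisecond precision), Cassandra resolves the tie in favour of the tombstone and the row appears to not exist.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.*;
import org.apache.cassandra.thrift.*;

public class ReplaceRow {
    static ByteBuffer bytes(String s) {
        return ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void replace(Cassandra.Client client, String key,
                               Map<String, String> newColumns) throws Exception {
        String cf = "cf_user";                              // assumed column family
        long deleteTs = System.currentTimeMillis() * 1000L; // microseconds
        long insertTs = deleteTs + 1;                       // strictly newer than the delete

        // 1. Row-level deletion: a Deletion with no predicate removes the whole row.
        Mutation del = new Mutation().setDeletion(new Deletion().setTimestamp(deleteTs));
        Map<ByteBuffer, Map<String, List<Mutation>>> delMap = new HashMap<>();
        delMap.put(bytes(key), Collections.singletonMap(cf, Collections.singletonList(del)));
        client.batch_mutate(delMap, ConsistencyLevel.QUORUM);

        // 2. Re-insert the new columns with the later timestamp.
        List<Mutation> inserts = new ArrayList<>();
        for (Map.Entry<String, String> e : newColumns.entrySet()) {
            Column col = new Column(bytes(e.getKey()))
                    .setValue(bytes(e.getValue()))
                    .setTimestamp(insertTs);
            inserts.add(new Mutation().setColumn_or_supercolumn(
                    new ColumnOrSuperColumn().setColumn(col)));
        }
        Map<ByteBuffer, Map<String, List<Mutation>>> insMap = new HashMap<>();
        insMap.put(bytes(key), Collections.singletonMap(cf, inserts));
        client.batch_mutate(insMap, ConsistencyLevel.QUORUM);
    }
}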
Hi,
I found that in my function, both the delete and the update use client-side timestamps. The update timestamp should always be larger than the deletion timestamp.
I wonder why the update still fails in some cases?
Thank you.
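The classic failure mode here (an assumption about what happened, not something stated in the mail) is a timestamp tie: if both the deletion and the update take their timestamp from System.currentTimeMillis() and run within the same millisecond, the two values compare equal, and Cassandra breaks that tie in favour of the deletion. A tiny illustration:

public class TimestampTie {
    public static void main(String[] args) {
        long deleteTs = System.currentTimeMillis(); // millisecond precision
        long insertTs = System.currentTimeMillis(); // very often the same value
        // If the two values are equal, the tombstone wins and the insert stays invisible.
        System.out.println("delete=" + deleteTs + " insert=" + insertTs
                + " tie=" + (deleteTs == insertTs));
    }
}

Switching to microsecond timestamps, or forcing the insert timestamp to be deleteTs + 1, avoids the tie.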
----- Original Message -----
From: Ryan Svihla r...@foundev.pro
Hi, all: In my cf, each row has two columns: one column is a timestamp (64-bit), the other is data of about 500 KB.
When I read the whole row, the qps is about 30. When I read only the data column, the qps is about 500.
Why is the read so much slower when such a small column is added to the read?
Thanks.
I use the Thrift interface to query the data.
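For anyone reproducing this, the two Thrift reads being compared would look roughly like the sketch below (assumed column family "cf_data" and column names; an illustration, not the poster's code): one call slices the whole row, the other names only the column it wants.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import org.apache.cassandra.thrift.*;

public class ReadComparison {
    static ByteBuffer bytes(String s) {
        return ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8));
    }

    // Whole-row read: an unbounded slice range returns every column of the row,
    // including the large data column.
    static List<ColumnOrSuperColumn> readWholeRow(Cassandra.Client client, String key)
            throws Exception {
        SlicePredicate all = new SlicePredicate()
                .setSlice_range(new SliceRange(bytes(""), bytes(""), false, 100));
        return client.get_slice(bytes(key), new ColumnParent("cf_data"), all,
                ConsistencyLevel.ONE);
    }

    // Named-column read: only the listed column is fetched.
    static List<ColumnOrSuperColumn> readColumn(Cassandra.Client client, String key,
                                                String column) throws Exception {
        SlicePredicate one = new SlicePredicate()
                .setColumn_names(Arrays.asList(bytes(column)));
        return client.get_slice(bytes(key), new ColumnParent("cf_data"), one,
                ConsistencyLevel.ONE);
    }
}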
What do your CQL queries look like?
-- Jack Krupansky
On Fri, Dec 26, 2014 at 8:00 AM, yhq...@sina.com wrote:
Hi, all: In my cf, each row has two columns: one column is a timestamp (64-bit), the other is data of about 500 KB.
Hi, all,
I wrote a program to test Cassandra 2.1. I have a 6-node cluster.
First, I insert 1 million rows into Cassandra, with row keys from 1 to 100.
Then I run my test program. My test program first deletes the row (using batch_mutate) and then inserts (using batch_mutate) that row,
Hi, all: The test program first inserts one row, then deletes it, then reads it back to compare. The test program runs this flow row by row, not in a batch.
Today I found the problem is caused by the deletion timestamp. The machine running the test program may not be time-synced with Cassandra
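One way to make the test independent of clock differences between the client box and the Cassandra nodes is to generate all timestamps on the client and keep them strictly increasing, since Cassandra compares only the supplied timestamps when reconciling the delete and the re-insert. A small sketch of such a generator (an illustration, not the original test code):

import java.util.concurrent.atomic.AtomicLong;

public class MonotonicMicros {
    private static final AtomicLong last = new AtomicLong();

    // Returns a microsecond timestamp that is strictly greater than any value
    // previously returned by this process, even if the system clock stalls or
    // jumps backwards.
    public static long next() {
        long nowMicros = System.currentTimeMillis() * 1000L;
        return last.updateAndGet(prev -> Math.max(prev + 1, nowMicros));
    }
}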