Thanks. Something is wrong with the client.
- Original Message -
From: Jeff Jirsa jeff.ji...@crowdstrike.com
To: user@cassandra.apache.org, yhq...@sina.com
Subject: Re: What's the format of Cassandra's timestamp, microsecond or millisecond?
Date: 2015-08-11 00:00
Hi All,
Currently I have three hosts. The data is not balanced: one host has 79 GB, and the
other two have 300 GB each. When I was adding a new host, I first got a "too many
open files" error, so I changed the open file limit from 100,000 to 1,000,000.
Then I got an OOM error.
Should I change the limits to 20,
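For context, the open-file limit is usually raised per user rather than system-wide. A sketch of the relevant `/etc/security/limits.conf` entries, assuming Cassandra runs as a `cassandra` user (the values shown are illustrative, not a recommendation for this cluster):

```
# /etc/security/limits.conf -- illustrative values only
cassandra  -  nofile  100000
cassandra  -  nproc   32768
```

The change takes effect on the next login session of that user, so the Cassandra process must be restarted under the new limits.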
The timestamp is arbitrary precision, selected by the client. If you're seeing
milliseconds on some data and microseconds on others, then you have one client
that's using microseconds and another using milliseconds; adjust your clients.
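The mismatch is easy to see by comparing the two encodings directly. A minimal sketch (plain Python, no Cassandra driver required) of normalizing a client-supplied millisecond timestamp to the microsecond convention that most Cassandra drivers use when generating write timestamps; the sample value is just the date of this thread:

```python
import time

def timestamp_micros(epoch_seconds=None):
    """Epoch timestamp in microseconds (16 digits), the convention most
    Cassandra drivers follow when they generate write timestamps."""
    if epoch_seconds is None:
        epoch_seconds = time.time()
    return int(epoch_seconds * 1_000_000)

def millis_to_micros(millis):
    """Normalize a millisecond timestamp (13 digits) to microseconds."""
    return millis * 1000

ms_ts = 1439251200000            # 2015-08-11 00:00:00 UTC, milliseconds
us_ts = millis_to_micros(ms_ts)  # same instant in microseconds
```

Mixing the two encodings matters because Cassandra resolves conflicting writes by comparing these raw numbers: a microsecond timestamp is roughly 1000x larger, so it will always win over a millisecond one.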
From: yhq...@sina.com
My Cassandra version is 2.1.4.
Thanks
Rock
On Aug 10, 2015, at 9:52 AM, rock zhang r...@alohar.com wrote:
I logged the open files every 10 minutes; the last record is:
lsof -p $cassandraPID | wc -l
74728
lsof | wc -l
5887913 # this is a very large number; I don't know why.
After the OOM, the open file count went back down to a few hundred (lsof | wc -l).
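As an aside, a system-wide `lsof | wc -l` is usually misleading here: it also lists memory-mapped files and shared libraries for every process on the box, so its line count typically far exceeds the number of real file descriptors. A small Linux-oriented sketch of the two numbers that actually matter, the process's rlimit and its live descriptor count (the `/proc` lookup is Linux-only and uses this script's own PID purely as a stand-in for Cassandra's):

```python
import os
import resource

# Soft/hard open-file limits for this process -- what `ulimit -n` reports.
# Cassandra hitting "too many open files" means it reached the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nofile soft={soft} hard={hard}")

def fd_count(pid):
    """Live file-descriptor count for a process, via /proc (Linux only)."""
    return len(os.listdir(f"/proc/{pid}/fd"))
```

Comparing `fd_count(cassandra_pid)` against `soft` over time would show whether the node is genuinely leaking descriptors or just legitimately holding many SSTables open during the rebalance.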
On behalf of the development community, I'm pleased to announce the
release of YCSB version 0.3.0.
Highlights:
* Apache Cassandra 2.0 CQL support
* Apache Accumulo 1.6 support
* Aerospike 3.5 support (new)
* Apache HBase 1.0 support
* MongoDB - support for all production versions released since