nodetool upgradesstables skip major version

2015-10-30 Thread Xu Zhongxing
Can I run nodetool upgradesstables after upgrading a Cassandra 2.0 node directly to Cassandra 3.0? Or do I have to upgrade to 2.1 first and then to 3.0?

High cpu usage when the cluster is idle

2015-10-24 Thread Xu Zhongxing
I saw an average 10% CPU usage on each node when the Cassandra cluster has no load at all. I checked which thread was using the CPU, and found the following two metrics threads, each occupying 5% CPU. jstack output: "metrics-meter-tick-thread-2" daemon prio=10 tid=...

Re: High cpu usage when the cluster is idle

2015-10-24 Thread Xu Zhongxing
The Cassandra version is 2.0.12. We have 1,500 tables in a cluster of 6 nodes, with a total of 2.5 billion rows. On 2015-10-24 at 20:52, "Xu Zhongxing" <xu_zhong_x...@163.com> wrote: I saw an average 10% CPU usage on each node when the Cassandra cluster has no load at all. I chec
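A generic way to tie that per-thread CPU usage back to a named JVM thread is to match the thread ID reported by top against the hex nid in jstack output. A minimal sketch; the PID and thread ID below are placeholders, not values from this thread:

```shell
# 1) Find the busiest native threads of the Cassandra process:
#      top -H -p <cassandra-pid>
# 2) top shows the thread ID in decimal, while jstack prints it in hex
#    (nid=0x...). Convert the decimal TID to hex to match the two:
tid=21514                       # example decimal thread ID from top
nid=$(printf '0x%x' "$tid")     # -> 0x540a
echo "$nid"
# 3) Locate the thread by its nid in the stack dump:
#      jstack <cassandra-pid> | grep -A2 "nid=$nid"
```

In this thread's case, the matching jstack entry was "metrics-meter-tick-thread-2".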

Re: full-table scan - extracting all data from C*

2015-01-27 Thread Xu Zhongxing
Both the Java driver's "select * from table" and Spark's sc.cassandraTable() work well. I use both of them frequently. At 2015-01-28 04:06:20, Mohammed Guller moham...@glassbeam.com wrote: Hi – Over the last few weeks, I have seen several emails on this mailing list from people trying to extract
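For context, both approaches boil down to slicing the full scan by token ranges, so each slice is served by one replica set. A hedged sketch of the kind of query a client or the Spark connector issues per slice (keyspace, table, and key names are illustrative, not from the thread):

```cql
-- Scan one token-range slice of a hypothetical table; a driver or
-- the Spark connector issues many such queries to cover the ring:
SELECT * FROM myks.mytable
 WHERE token(pk) > -9223372036854775808
   AND token(pk) <= -4611686018427387904;
```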

Re: full-table scan - extracting all data from C*

2015-01-27 Thread Xu Zhongxing
This is hard to answer; performance depends on context. You could tune various parameters. At 2015-01-28 14:43:38, Shenghua (Daniel) Wan wansheng...@gmail.com wrote: Cool. What about performance? e.g. how many records for how long? On Tue, Jan 27, 2015 at 10:16 PM, Xu Zhongxing

Re: Re: full-table scan - extracting all data from C*

2015-01-27 Thread Xu Zhongxing
suggest any API you used? Thanks. On Tue, Jan 27, 2015 at 5:33 PM, Xu Zhongxing xu_zhong_x...@163.com wrote: Both Java driver select * from table and Spark sc.cassandraTable() work well. I use both of them frequently. At 2015-01-28 04:06:20, Mohammed Guller moham...@glassbeam.com wrote: Hi

Re: full-table scan - extracting all data from C*

2015-01-27 Thread Xu Zhongxing
From: Xu Zhongxing [mailto:xu_zhong_x...@163.com] Sent: Tuesday, January 27, 2015 5:34 PM To: user@cassandra.apache.org Subject: Re: full-table scan - extracting all data from C* Both the Java driver's "select * from table" and Spark's sc.cassandraTable() work well. I use both of them frequently

Re: Dynamic Columns

2015-01-20 Thread Xu Zhongxing
Maybe this is the closest thing to dynamic columns in CQL 3: create table review ( product_id bigint, created_at timestamp, data_key text, data_tvalue text, data_ivalue int, primary key ((product_id, created_at), data_key) ); data_tvalue and data_ivalue are optional.
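The key/value pattern above stores each "dynamic column" as one clustering row under the partition, with only the typed value column that matches the attribute set. A minimal usage sketch, assuming the table above with the typos corrected (the example values are illustrative):

```cql
-- One "dynamic" attribute per row; set data_tvalue for text
-- attributes and data_ivalue for integer ones:
INSERT INTO review (product_id, created_at, data_key, data_tvalue)
VALUES (42, '2015-01-20 00:00:00', 'color', 'red');

INSERT INTO review (product_id, created_at, data_key, data_ivalue)
VALUES (42, '2015-01-20 00:00:00', 'rating', 5);

-- Read back all attributes of one review:
SELECT data_key, data_tvalue, data_ivalue
  FROM review
 WHERE product_id = 42 AND created_at = '2015-01-20 00:00:00';
```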

Re: Re: Dynamic Columns

2015-01-20 Thread Xu Zhongxing
. On Tue, Jan 20, 2015 at 8:12 PM, Xu Zhongxing xu_zhong_x...@163.com wrote: Maybe this is the closest thing to dynamic columns in CQL 3: create table review ( product_id bigint, created_at timestamp, data_key text, data_tvalue text, data_ivalue int, primary key

Re: Dynamic Columns

2015-01-20 Thread Xu Zhongxing
at 8:50 PM, Xu Zhongxing xu_zhong_x...@163.com wrote: I approximate dynamic columns by data_key and data_value columns. Is there a better way to get dynamic columns in CQL 3? At 2015-01-21 09:41:02, Peter Lin wool...@gmail.com wrote: I think that table example misses the point of chetan's

Re: nodetool compact cannot remove tombstone in system keyspace

2015-01-13 Thread Xu Zhongxing
about the tombstone_failure_threshold, but the tombstones will only get removed during compaction if they are older than GC_Grace_Seconds for that CF. How old are these tombstones? Rahul On Jan 12, 2015, at 11:27 PM, Xu Zhongxing xu_zhong_x...@163.com wrote: Hi, When I connect to C
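gc_grace_seconds is a per-table setting, so whether tombstones are purged at compaction depends on each table's configuration. A hedged sketch of inspecting and lowering it on an application table (myks.mytable is illustrative, and note that system keyspace tables cannot be altered this way):

```cql
-- In cqlsh, show the table definition including gc_grace_seconds:
DESCRIBE TABLE myks.mytable;

-- Shorten the tombstone GC window to one day (only safe if repairs
-- run more frequently than this, or deleted data can resurrect):
ALTER TABLE myks.mytable WITH gc_grace_seconds = 86400;
```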

nodetool compact cannot remove tombstone in system keyspace

2015-01-12 Thread Xu Zhongxing
Hi, when I connected to C* with the driver, I found some warnings in the log (I increased tombstone_failure_threshold to 15 to see the warning): WARN [ReadStage:5] 2015-01-13 12:21:14,595 SliceQueryFilter.java (line 225) Read 34188 live and 104186 tombstoned cells in system.schema_columns (see

Re: CQLSSTableWriter memory leak

2014-06-06 Thread Xu Zhongxing
We figured out the reason for the growing memory usage. When adding rows, the flush-to-disk operation is done in SSTableSimpleUnsortedWriter.newRow(). But in the compound-primary-key case, when the clustering key is identical, no new row is created, so the single huge row is kept in the

CQLSSTableWriter memory leak

2014-06-05 Thread Xu Zhongxing
I am using Cassandra's CQLSSTableWriter to import a large amount of data into Cassandra. When I use CQLSSTableWriter to write to a table with a compound primary key, the memory consumption keeps growing. The JVM's GC cannot collect any used memory. When writing to tables with no compound primary

Re: CQLSSTableWriter memory leak

2014-06-05 Thread Xu Zhongxing
Is writing too many rows to a single partition the cause of the memory consumption? What I want to achieve is this: say I have 5 partition IDs, each corresponding to 50 million IDs. Given a partition ID, I need to get its corresponding 50 million IDs. Is there another way to design the schema to
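One common redesign for this shape of problem is to add a synthetic bucket column to the partition key, so the 50 million IDs are spread over many bounded partitions instead of one huge one. A sketch under assumed names (not from the thread):

```cql
-- Split each logical partition into buckets so no single partition
-- holds 50 million rows; bucket is computed at write time, e.g. id % 100:
CREATE TABLE ids_by_partition (
    partition_id bigint,
    bucket       int,
    id           bigint,
    PRIMARY KEY ((partition_id, bucket), id)
);

-- Readers fan out over all bucket values for one partition_id:
-- SELECT id FROM ids_by_partition WHERE partition_id = ? AND bucket = ?;
```

The trade-off is that reads must query every bucket, but each query now touches a partition of manageable size.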