[jira] [Created] (CASSANDRA-4618) flushed sstables too small (less than 20M) because of wrong liveRatio

2012-09-05 Thread Cheng Zhang (JIRA)
Cheng Zhang created CASSANDRA-4618:
--

 Summary: flushed sstables too small (less than 20M) because of wrong liveRatio
 Key: CASSANDRA-4618
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4618
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.10
 Environment: Cassandra 1.0.10 deployed on SSD
Reporter: Cheng Zhang


After my cluster has been running for some time, I find that too many small sstable files (about 20M each) are being flushed from memtables. Tracing system.log, I find the following record:
 WARN [MemoryMeter:1] 2012-09-04 16:04:50,392 Memtable.java (line 181) setting live ratio to maximum of 64 instead of Infinity

After that the live ratio never changes, so from then on the memtables are flushed before they are big enough.
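
For context, a minimal sketch of how such a ratio could come out as Infinity and then be clamped to 64, matching the WARN line above. This is an assumption based only on the log message (a live ratio estimated as in-memory size divided by serialized throughput), not the actual Memtable.java code; the class and constant names are made up:

    // Hypothetical simplification, for illustration only.
    class LiveRatioSketch
    {
        static final double MAX_SANE_LIVE_RATIO = 64.0; // the "maximum of 64" from the WARN line

        // deepSizeBytes: measured in-memory size; serializedBytes: bytes of data written so far
        static double estimateLiveRatio(long deepSizeBytes, long serializedBytes)
        {
            double ratio = (double) deepSizeBytes / serializedBytes; // 0 serialized bytes => Infinity
            return ratio > MAX_SANE_LIVE_RATIO ? MAX_SANE_LIVE_RATIO : ratio;
        }
    }

Once the ratio is pinned at its maximum, any flush decision that scales serialized throughput by the live ratio will overestimate the memtable's footprint by up to 64x, which is consistent with the early flushes described above.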



[jira] [Updated] (CASSANDRA-4618) flushed sstables too small (less than 20M) because of wrong liveRatio

2012-09-05 Thread Cheng Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheng Zhang updated CASSANDRA-4618:
---

Description: 
After my cluster has been running for some time, I find that too many small sstable files (about 20M each) are being flushed from memtables. Tracing system.log, I find the following record:
 WARN [MemoryMeter:1] 2012-09-04 16:04:50,392 Memtable.java (line 181) setting live ratio to maximum of 64 instead of Infinity

After that the live ratio never changes, so from then on the memtables are flushed before they are big enough.

I also find this line:

INFO [main] 2012-09-04 16:04:51,205 ColumnFamilyStore.java (line 705) Enqueuing flush of Memtable-UrlCrawlStatsCF@241601165(0/0 serialized/live bytes, 15212 ops)

The serialized and live bytes are both zero even though there are 15212 ops, which is very strange.
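
To see why a pinned ratio of 64 would produce sstables this small, here is a back-of-the-envelope calculation. It assumes the flush trigger compares serialized bytes multiplied by the live ratio against the configured memtable threshold, and it uses an illustrative 1 GB threshold rather than this cluster's actual setting:

    // Illustrative arithmetic only; the 1 GB threshold is an assumption,
    // not the reporter's configuration.
    public class FlushSizeExample
    {
        public static void main(String[] args)
        {
            double liveRatio = 64.0;                   // pinned maximum from the WARN line
            long thresholdBytes = 1024L * 1024 * 1024; // assumed 1 GB memtable threshold
            long serializedAtFlush = (long) (thresholdBytes / liveRatio);
            System.out.println((serializedAtFlush / (1024 * 1024)) + " MB serialized at flush");
            // prints 16 MB, in the same ballpark as the ~20M sstables observed
        }
    }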


  was:
After my cluster has been running for some time, I find that too many small sstable files (about 20M each) are being flushed from memtables. Tracing system.log, I find the following record:
 WARN [MemoryMeter:1] 2012-09-04 16:04:50,392 Memtable.java (line 181) setting live ratio to maximum of 64 instead of Infinity

After that the live ratio never changes, so from then on the memtables are flushed before they are big enough.


 flushed sstables too small (less than 20M) because of wrong liveRatio
 

 Key: CASSANDRA-4618
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4618
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.10
 Environment: Cassandra 1.0.10 deployed on SSD
Reporter: Cheng Zhang

 After my cluster has been running for some time, I find that too many small 
 sstable files (about 20M each) are being flushed from memtables. Tracing 
 system.log, I find the following record:
  WARN [MemoryMeter:1] 2012-09-04 16:04:50,392 Memtable.java (line 181) setting live ratio to maximum of 64 instead of Infinity
 After that the live ratio never changes, so from then on the memtables are 
 flushed before they are big enough.
 I also find this line:
  INFO [main] 2012-09-04 16:04:51,205 ColumnFamilyStore.java (line 705) Enqueuing flush of Memtable-UrlCrawlStatsCF@241601165(0/0 serialized/live bytes, 15212 ops)
 The serialized and live bytes are both zero even though there are 15212 ops, 
 which is very strange.



[jira] [Created] (CASSANDRA-4556) upgrade from SizeTiered to Leveled failed because of insufficient free space

2012-08-20 Thread Cheng Zhang (JIRA)
Cheng Zhang created CASSANDRA-4556:
--

 Summary: upgrade from SizeTiered to Leveled failed because of insufficient free space
 Key: CASSANDRA-4556
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4556
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.0.10
 Environment: Cassandra 1.0.10, Ubuntu 64bit server.
Reporter: Cheng Zhang


I use Cassandra 1.0.10 with two data directories and SizeTieredCompactionStrategy. After some time, the total free space becomes smaller than the biggest data file. At this point I want to change the compaction strategy to Leveled to save space, but it fails because there is not enough space to compact the biggest data file.
However, when I change some code so that, if the biggest data file cannot be compacted, the second biggest is chosen instead (and so on down the list), the smaller files are compacted first; as compaction proceeds, enough space is freed and the biggest data file can eventually be compacted.
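
The proposal amounts to a fallback rule in compaction candidate selection: compact the largest sstable that still fits in the available free space, and fall back to smaller ones otherwise. A rough sketch of that rule (illustration only, not a patch against the actual compaction code; class and method names are made up):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    class CompactionFallbackSketch
    {
        static class SSTable
        {
            final String name;
            final long sizeBytes;
            SSTable(String name, long sizeBytes) { this.name = name; this.sizeBytes = sizeBytes; }
        }

        // Returns the largest candidate that fits in freeBytes, or null if none fits yet.
        static SSTable pickCompactable(List<SSTable> candidates, long freeBytes)
        {
            List<SSTable> sorted = new ArrayList<SSTable>(candidates);
            Collections.sort(sorted, new Comparator<SSTable>()
            {
                public int compare(SSTable a, SSTable b)
                {
                    return Long.compare(b.sizeBytes, a.sizeBytes); // largest first
                }
            });
            for (SSTable s : sorted)
                if (s.sizeBytes <= freeBytes)
                    return s;
            return null; // nothing fits yet; wait for earlier compactions to free space
        }
    }

As smaller files are rewritten and their originals removed, free space grows until the largest file can finally be compacted, which is the behaviour described above.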





[jira] [Created] (CASSANDRA-4557) Cassandra's startup is too slow when the data is more than 1T.

2012-08-20 Thread Cheng Zhang (JIRA)
Cheng Zhang created CASSANDRA-4557:
--

 Summary: Cassandra's startup is too slow when the data is more 
than 1T.
 Key: CASSANDRA-4557
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4557
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.0.10
 Environment: Cassandra 1.0.10
Reporter: Cheng Zhang
Priority: Minor


My Cassandra cluster has more than 1T of data on each server. Every time I need to 
restart the cluster, it takes a long time to read the indexes from the index files.





[jira] [Commented] (CASSANDRA-4557) Cassandra's startup is too slow when the data is more than 1T.

2012-08-20 Thread Cheng Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13437837#comment-13437837
 ] 

Cheng Zhang commented on CASSANDRA-4557:


I ran a test: with 700G of data, startup takes nearly half an hour.
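
For scale, a rough calculation based on the numbers in this comment. The fraction of data taken by index files is a pure assumption (the report does not state it); the point is only that scanning index files proportional to 700G of data can plausibly dominate a half-hour startup:

    // Back-of-the-envelope only: the 3% index fraction is hypothetical,
    // not measured on the reporter's cluster.
    public class StartupScanEstimate
    {
        public static void main(String[] args)
        {
            double dataGb = 700.0;                 // from the comment
            double assumedIndexFraction = 0.03;    // hypothetical
            double startupSeconds = 30 * 60;       // "nearly half an hour"
            double indexGb = dataGb * assumedIndexFraction;
            double impliedMbPerSec = indexGb * 1024 / startupSeconds;
            System.out.printf("~%.0f GB of index read at ~%.0f MB/s%n", indexGb, impliedMbPerSec);
            // prints ~21 GB at ~12 MB/s: slow enough to account for the startup time
        }
    }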

 Cassandra's startup is too slow when the data is more than 1T.
 --

 Key: CASSANDRA-4557
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4557
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.0.10
 Environment: Cassandra 1.0.10
Reporter: Cheng Zhang
Priority: Minor

 My Cassandra cluster has more than 1T of data on each server. Every time I need 
 to restart the cluster, it takes a long time to read the indexes from the index files.





[jira] [Created] (CASSANDRA-4467) insufficient space for compaction when upgrading compaction strategy from SizeTiered to Leveled

2012-07-27 Thread Cheng Zhang (JIRA)
Cheng Zhang created CASSANDRA-4467:
--

 Summary: insufficient space for compaction when upgrading compaction strategy from SizeTiered to Leveled
 Key: CASSANDRA-4467
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4467
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.10
 Environment: Ubuntu, Oracle Java 1.7, Cassandra 1.0.10

Reporter: Cheng Zhang


Cassandra has two data directories, as follows:
cassandra-disk0: about 500G used, about 250G free
cassandra-disk1: about 500G used, about 250G free
The largest data file is about 400G. When I upgrade from 
SizeTieredCompactionStrategy to LeveledCompactionStrategy, there is no space to 
do this, because the free space in each data directory is smaller than the 
largest data file. But the total free space is enough for the compaction.
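
The sizes in the report make the mismatch concrete: each data directory individually fails a single-directory space check for the 400G file, while the combined free space would pass. A small worked example (treating the check as "does the largest file fit in one directory's free space", which is an assumption about how the requirement is evaluated):

    // Worked example using the figures from this report (all sizes in GB).
    public class SpaceCheckExample
    {
        public static void main(String[] args)
        {
            long freeDisk0 = 250, freeDisk1 = 250, largestSSTable = 400;
            boolean fitsInOneDirectory = largestSSTable <= Math.max(freeDisk0, freeDisk1); // false
            boolean fitsInTotal = largestSSTable <= freeDisk0 + freeDisk1;                 // true
            System.out.println("fits in a single data directory: " + fitsInOneDirectory);
            System.out.println("fits in combined free space:     " + fitsInTotal);
        }
    }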





[jira] [Updated] (CASSANDRA-4467) insufficient space for compaction when upgrading compaction strategy from SizeTiered to Leveled

2012-07-27 Thread Cheng Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheng Zhang updated CASSANDRA-4467:
---

Issue Type: Improvement  (was: Bug)

 insufficient space for compaction when upgrading compaction strategy from 
 SizeTiered to Leveled
 -

 Key: CASSANDRA-4467
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4467
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.10
 Environment: Ubuntu, Oracle Java 1.7, Cassandra 1.0.10
Reporter: Cheng Zhang

 Cassandra has two data directories, as follows:
 cassandra-disk0: about 500G used, about 250G free
 cassandra-disk1: about 500G used, about 250G free
 The largest data file is about 400G. When I upgrade from 
 SizeTieredCompactionStrategy to LeveledCompactionStrategy, there is no space 
 to do this, because the free space in each data directory is smaller than the 
 largest data file. But the total free space is enough for the compaction.
