Re: terrible read/write latency fluctuation

2015-11-02 Thread
Thanks, all of you.

--
Ranger Tsao

2015-10-30 18:25 GMT+08:00 Anishek Agarwal <anis...@gmail.com>:

> If it's some sort of time series, DTCS might turn out to be better for
> compaction. Some disk monitoring might also help you understand whether the
> disk is the bottleneck.
>
> On Sun, Oct 25, 2015 at 3:47 PM, 曹志富 <cao.zh...@gmail.com> wrote:
>
>> I will try to trace a read that takes > 20 ms.
>>
>> Just HDD. No deletes, just a 60-day TTL. Values are small; max length is 140.
>>
>> My data is like a time series: 90% of reads are for data with a timestamp
>> newer than 7 days. Data is almost insert-only, with a little updating.
>>
>
>

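A toy sketch of the bucketing idea behind the DTCS suggestion quoted above: DateTieredCompactionStrategy groups SSTables by write time, so data that expires together (e.g. under a 60-day TTL) sits in the same files and can be dropped wholesale. This sketch uses fixed windows for simplicity (real DTCS grows the window for older data), and the timestamps are hypothetical:

```python
from collections import defaultdict

def bucket_by_time_window(sstable_min_timestamps, window_seconds=3600):
    """Group SSTables by the time window their data was written in.

    Fixed-size windows for simplicity; the real DateTieredCompactionStrategy
    uses windows that grow with data age.
    """
    buckets = defaultdict(list)
    for ts in sstable_min_timestamps:
        buckets[ts // window_seconds].append(ts)
    # Only windows containing more than one SSTable need a compaction.
    return {w: members for w, members in buckets.items() if len(members) > 1}

# Hypothetical SSTable minimum write timestamps (in seconds).
candidates = bucket_by_time_window([100, 200, 4000, 7300, 7400])
```

Because a TTL expires whole windows at once, a time-windowed strategy can drop entire SSTables instead of compacting tombstones away, which suits the insert-mostly time-series workload described in this thread.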

Re: terrible read/write latency fluctuation

2015-10-25 Thread
I will try to trace a read that takes > 20 ms.

Just HDD. No deletes, just a 60-day TTL. Values are small; max length is 140.

My data is like a time series: 90% of reads are for data with a timestamp
newer than 7 days. Data is almost insert-only, with a little updating.
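For the tracing step, cqlsh can capture a per-read trace directly; a minimal sketch (the keyspace, table, and key below are placeholders, not from this thread):

```sql
-- in cqlsh; table and key are hypothetical
TRACING ON;
SELECT * FROM my_keyspace.my_table WHERE id = 42;
TRACING OFF;
```

The trace output breaks the read down per stage (merging SSTables, reading from disk, and so on), which should show whether the >20 ms reads are dominated by HDD seeks.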


CqlOutputFormat with auth

2015-10-23 Thread
Hadoop 2.6, Cassandra 2.1.6. Here is the exception stack:


Error: java.lang.RuntimeException: InvalidRequestException(why:You have not
logged in)
at
org.apache.cassandra.hadoop.cql3.CqlRecordWriter.&lt;init&gt;(CqlRecordWriter.java:121)
at
org.apache.cassandra.hadoop.cql3.CqlRecordWriter.&lt;init&gt;(CqlRecordWriter.java:88)
at
org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:74)
at
org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:55)
at
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.&lt;init&gt;(ReduceTask.java:540)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: InvalidRequestException(why:You have not logged in)
at
org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result$execute_cql3_query_resultStandardScheme.read(Cassandra.java:49032)
at
org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result$execute_cql3_query_resultStandardScheme.read(Cassandra.java:49009)
at
org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:48924)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at
org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1693)
at
org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1678)
at
org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:335)
at
org.apache.cassandra.hadoop.cql3.CqlRecordWriter.&lt;init&gt;(CqlRecordWriter.java:106)
... 11 more


This issue (https://issues.apache.org/jira/browse/CASSANDRA-7340) seems to
still not be completely fixed.
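One thing worth ruling out first is whether the output credentials actually reach the task side at all. In the 2.1 Hadoop integration they are read from the job configuration, normally set via `ConfigHelper` (something like `ConfigHelper.setOutputKeyspaceUserNameAndPassword(conf, user, password)`). The property keys below are from memory of the 2.1 `ConfigHelper` and the values are placeholders; verify both against your Cassandra version:

```properties
# Hypothetical values; key names as I recall them from
# org.apache.cassandra.hadoop.ConfigHelper in the 2.1 line.
cassandra.output.keyspace.username=hadoop_user
cassandra.output.keyspace.passwd=hadoop_password
```

If the keys are present in the job conf and the reducer still fails with "You have not logged in", that points at the CqlRecordWriter login path from CASSANDRA-7340 rather than at the job setup.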



--
Ranger Tsao


Re: unusual GC log

2015-10-20 Thread
C* version is 2.1.6.
CentOS release 6.5 (Final)
Sun JDK 1.7.0_71 64bit.

Attached are my config files.

Thank you very much!!!

--
Ranger Tsao

2015-10-20 15:43 GMT+08:00 Graham Sanderson <gra...@vast.com>:

> What version of C* are you running? Any special settings in
> cassandra.yaml? Are you running with stock GC settings in cassandra-env.sh?
> What JDK/OS?
>
> On Oct 19, 2015, at 11:40 PM, 曹志富 <cao.zh...@gmail.com> wrote:
>
> INFO  [Service Thread] 2015-10-20 10:42:47,854 GCInspector.java:252 -
> ParNew GC in 476ms.  CMS Old Gen: 4288526240 -> 4725514832; Par Eden Space:
> 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:42:50,870 GCInspector.java:252 -
> ParNew GC in 423ms.  CMS Old Gen: 4725514832 -> 5114687560; Par Eden Space:
> 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:42:53,847 GCInspector.java:252 -
> ParNew GC in 406ms.  CMS Old Gen: 5114688368 -> 5513119264; Par Eden
> Space: 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:42:57,118 GCInspector.java:252 -
> ParNew GC in 421ms.  CMS Old Gen: 5513119264 -> 5926324736; Par Eden
> Space: 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:00,041 GCInspector.java:252 -
> ParNew GC in 437ms.  CMS Old Gen: 5926324736 -> 6324793584; Par Eden Space:
> 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:03,029 GCInspector.java:252 -
> ParNew GC in 429ms.  CMS Old Gen: 6324793584 -> 6693672608; Par Eden
> Space: 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:05,566 GCInspector.java:252 -
> ParNew GC in 339ms.  CMS Old Gen: 6693672608 -> 6989128592; Par Eden
> Space: 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:08,431 GCInspector.java:252 -
> ParNew GC in 421ms.  CMS Old Gen: 6266493464 -> 6662041272; Par Eden
> Space: 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:11,131 GCInspector.java:252 -
> ConcurrentMarkSweep GC in 215ms.  CMS Old Gen: 5926324736 -> 4574418480;
> CMS Perm Gen: 33751256 -> 33751192; Par Eden Space: 7192 -> 611360336;
> INFO  [Service Thread] 2015-10-20 10:43:11,848 GCInspector.java:252 -
> ParNew GC in 511ms.  CMS Old Gen: 4574418480 -> 4996166672; Par Eden Space:
> 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:14,915 GCInspector.java:252 -
> ParNew GC in 395ms.  CMS Old Gen: 4996167912 -> 5380926744; Par Eden Space:
> 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:18,335 GCInspector.java:252 -
> ParNew GC in 432ms.  CMS Old Gen: 5380926744 -> 5811659120; Par Eden Space:
> 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:21,492 GCInspector.java:252 -
> ParNew GC in 439ms.  CMS Old Gen: 5811659120 -> 6270861936; Par Eden Space:
> 671088640 -> 0;
> INFO  [Service Thread] 2015-10-20 10:43:24,698 GCInspector.java:252 -
> ParNew GC in 490ms.  CMS Old Gen: 6270861936 -> 6668734208; Par Eden Space:
> 671088640 -> 0; Par Survivor Space: 83886080 -> 83886072
> INFO  [Service Thread] 2015-10-20 10:43:27,963 GCInspector.java:252 -
> ParNew GC in 457ms.  CMS Old Gen: 6668734208 -> 7072885208; Par Eden
> Space: 671088640 -> 0; Par Survivor Space: 83886072 -> 83886080
>
> After a few seconds the node is marked down.
>
> My node config is: 8 GB heap, NEW_HEAP size 800 MB.
>
> Node hardware is: 4 cores, 32 GB RAM.
>
> --
> Ranger Tsao
>
>
>


cassandra.yaml
Description: Binary data


cassandra-env.sh
Description: Bourne shell script


unusual GC log

2015-10-19 Thread
INFO  [Service Thread] 2015-10-20 10:42:47,854 GCInspector.java:252 -
ParNew GC in 476ms.  CMS Old Gen: 4288526240 -> 4725514832; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:42:50,870 GCInspector.java:252 -
ParNew GC in 423ms.  CMS Old Gen: 4725514832 -> 5114687560; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:42:53,847 GCInspector.java:252 -
ParNew GC in 406ms.  CMS Old Gen: 5114688368 -> 5513119264; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:42:57,118 GCInspector.java:252 -
ParNew GC in 421ms.  CMS Old Gen: 5513119264 -> 5926324736; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:00,041 GCInspector.java:252 -
ParNew GC in 437ms.  CMS Old Gen: 5926324736 -> 6324793584; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:03,029 GCInspector.java:252 -
ParNew GC in 429ms.  CMS Old Gen: 6324793584 -> 6693672608; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:05,566 GCInspector.java:252 -
ParNew GC in 339ms.  CMS Old Gen: 6693672608 -> 6989128592; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:08,431 GCInspector.java:252 -
ParNew GC in 421ms.  CMS Old Gen: 6266493464 -> 6662041272; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:11,131 GCInspector.java:252 -
ConcurrentMarkSweep GC in 215ms.  CMS Old Gen: 5926324736 -> 4574418480;
CMS Perm Gen: 33751256 -> 33751192; Par Eden Space: 7192 -> 611360336;
INFO  [Service Thread] 2015-10-20 10:43:11,848 GCInspector.java:252 -
ParNew GC in 511ms.  CMS Old Gen: 4574418480 -> 4996166672; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:14,915 GCInspector.java:252 -
ParNew GC in 395ms.  CMS Old Gen: 4996167912 -> 5380926744; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:18,335 GCInspector.java:252 -
ParNew GC in 432ms.  CMS Old Gen: 5380926744 -> 5811659120; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:21,492 GCInspector.java:252 -
ParNew GC in 439ms.  CMS Old Gen: 5811659120 -> 6270861936; Par Eden Space:
671088640 -> 0;
INFO  [Service Thread] 2015-10-20 10:43:24,698 GCInspector.java:252 -
ParNew GC in 490ms.  CMS Old Gen: 6270861936 -> 6668734208; Par Eden Space:
671088640 -> 0; Par Survivor Space: 83886080 -> 83886072
INFO  [Service Thread] 2015-10-20 10:43:27,963 GCInspector.java:252 -
ParNew GC in 457ms.  CMS Old Gen: 6668734208 -> 7072885208; Par Eden Space:
671088640 -> 0; Par Survivor Space: 83886072 -> 83886080

After a few seconds the node is marked down.

My node config is: 8 GB heap, NEW_HEAP size 800 MB.

Node hardware is: 4 cores, 32 GB RAM.
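The number worth quantifying in the log above is how much each ParNew cycle promotes into the old generation. A minimal sketch that parses three of the lines (the regex assumes the 2.1 GCInspector message format exactly as shown above):

```python
import re

# Three ParNew lines copied from the GCInspector output above.
LOG = """\
ParNew GC in 476ms.  CMS Old Gen: 4288526240 -> 4725514832; Par Eden Space: 671088640 -> 0;
ParNew GC in 423ms.  CMS Old Gen: 4725514832 -> 5114687560; Par Eden Space: 671088640 -> 0;
ParNew GC in 406ms.  CMS Old Gen: 5114688368 -> 5513119264; Par Eden Space: 671088640 -> 0;
"""

pattern = re.compile(r"ParNew GC in (\d+)ms\.\s+CMS Old Gen: (\d+) -> (\d+)")

# Bytes promoted into the old gen by each young collection.
promoted = [int(after) - int(before)
            for _, before, after in pattern.findall(LOG)]
avg_mb = sum(promoted) / len(promoted) / 1024 / 1024
```

Each ~0.4 s young collection tenures roughly 390 MB here: with an 800 MB new gen (640 MB eden), nearly everything that survives eden goes straight to the old gen every ~3 s, so CMS cycles continuously and the long stalls eventually get the node marked down. A larger new generation on this 32 GB machine may be worth trying.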

--
Ranger Tsao


Re: abnormal log after remove a node

2015-09-01 Thread
Just restart all of the C* nodes.

--
Ranger Tsao

2015-08-25 18:17 GMT+08:00 Alain RODRIGUEZ <arodr...@gmail.com>:

> Hi, I am facing the same issue on 2.0.16.
>
> Did you solve this? How?
>
> I plan to try a rolling restart and see if gossip state recovers from this.
>
> C*heers,
>
> Alain
>
> 2015-06-19 11:40 GMT+02:00 曹志富 <cao.zh...@gmail.com>:
>
>> I have a C* 2.1.5 cluster with 24 nodes. A few days ago I removed a node
>> from this cluster using nodetool decommission.
>>
>> But today I found some log entries like this:
>>
>> INFO  [GossipStage:1] 2015-06-19 17:38:05,616 Gossiper.java:968 -
>> InetAddress /172.19.105.41 is now DOWN
>> INFO  [GossipStage:1] 2015-06-19 17:38:05,617 StorageService.java:1885 -
>> Removing tokens [-1014432261309809702, -1055322450438958612,
>> -1120728727235087395, -1191392141261832305, -1203676771883970142,
>> -1215563040745505837, -1215648909329054362, -1269531760567530381,
>> -1278047879489577908, -1313427877031136549, -1342822572958042617,
>> -1350792764922315814, -1383390744017639599, -139000372807970456,
>> -140827955201469664, -1631551789771606023, -1633789813430312609,
>> -1795528665156349205, -1836619444785023397, -1879127294549041822,
>> -1962337787208890426, -2022309807234530256, -2033402140526360327,
>> -2089413865145942100, -210961549458416802, -2148530352195763113,
>> -2184481573787758786, -610790268720205, -2340762266634834427,
>> -2513416003567685694, -2520971378752190013, -2596695976621541808,
>> -2620636796023437199, -2640378596436678113, -2679143017361311011,
>> -2721176590519112233, -2749213392354746126, -279267896827516626,
>> -2872377759991294853, -2904711688111888325, -290489381926812623,
>> -3000574339499272616, -301428600802598523, -3019280155316984595,
>> -3024451041907074275, -3056898917375012425, -3161300347260716852,
>> -3166392383659271772, -3327634380871627036, -3530685865340274372,
>> -3563112657791369745, -366930313427781469, -3729582520450700795,
>> -3901838244986519991, -4065326606010524312, -4174346928341550117,
>> -4184239233207315432, -4204369933734181327, -4206479093137814808,
>> -421410317165821100, -4311166118017934135, -4407123461118340117,
>> -4466364858622123151, -4466939645485100087, -448955147512581975,
>> -4587780638857304626, -4649897584350376674, -4674234125365755024
>> , -4833801201210885896, -4857586579802212277, -4868896650650107463,
>> -4980063310159547694, -4983471821416248610, -4992846054037653676,
>> -5026994389965137674, -5143025003536791810,
>> -5198414516309928594, -5245363745777287346, -5346838390293957674,
>> -5374413419545696184, -5427881744040857637, -5453876964430787287,
>> -5491923669475601173, -552197341385992126,
>> -5523011502670737422, -5537121117160410549, -5557015938925208697,
>> -5572489682738121748, -5745899409803353484, -5771239101488682535,
>> -5893479791287484099, -5976673041480754044,
>> -6014643892406938367, -6086002438656595783, -6129360679394503700,
>> -6224240257573911174, -6290393495130499466, -6378712056928268929,
>> -6430306056990093461, -6800188263839065013,
>> -6912720411187525051, -7160327814305587432, -7175004328733776324,
>> -7272070430660252577, -7307945744786025148, -742448651973108101,
>> -7539255117639002578, -765746071699797894,
>> -7846698077070579798, -7870621904906244395, -7900841391761900719,
>> -7918145426423910061, -7936795453892692473, -8070255024778921411,
>> -8086888710627677669, -8124855925323654631,
>> -8175270408138820500, -8271197636596881168, -8336685710406477123,
>> -8466220397076441627, -8534337908154758270, -8550484400487603561,
>> -862246738021989870, -8727219287242892185,
>> -8895705475282612927, -8921801772904834063, -9057266752652143883,
>> -9059183540698454288, -9067986437682229598, -9148183367896132028,
>> -962208188860606543, 1085944772581921830,
>> 1189775396643491793, 1253728955879686947, 1389982523380382228,
>> 1429632314664544045, 143610053770130548, 150118120072602242,
>> 1575692041584712198, 1624575905722628764, 1789476212785155173,
>> 1995296121962835019, 2041217364870030239,
>> 2120277336231792146, 2124445736743406711, 2154979704292433983,
>> 2340726755918680765, 23481654796845972, 2362026808435224407,
>> 2366144489007464626, 2381492708106933027, 2398868971489617398,
>> 2427315953339163528, 2433999003913998534, 2633074510238705620,
>> 266659839023809792, 2677817641360639089, 2719725410894526151,
>> 2751925111749406683, 2815703589803785617,
>> 3041515796379693113, 3044903149214270978, 3094954503756703989,
>> 3243933267690865263, 3246086646486800371, 33270068
>>

Re: node join and decommission exception

2015-06-29 Thread
The cluster info is:
Cluster Information:
Name: Status Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
85f8632f-5c43-3343-a73e-cef935a186ab: [172.19.105.58, 172.19.105.56,
172.19.105.57, 172.19.105.54, 172.19.105.55, 172.19.105.52, 172.19.105.53,
172.19.105.50, 172.19.105.16, 172.19.105.51, 172.19.105.48, 172.19.105.49,
172.19.105.19, 172.19.105.13, 172.19.105.72, 172.19.105.12, 172.19.105.15,
172.19.105.14, 172.19.105.9, 172.19.105.11, 172.19.105.10, 172.19.105.39,
172.19.105.71, 172.19.105.70]

--
Ranger Tsao

2015-06-29 21:33 GMT+08:00 曹志富 cao.zh...@gmail.com:

 Error when the node joins the cluster:

 WARN  13:30:18 UnknownColumnFamilyException reading from socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
 cfId=91748db0-9af4-11e4-a861-0bbf95bc6f42
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:272)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:188)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:170)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:88)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 WARN  13:30:18 UnknownColumnFamilyException reading from socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
 cfId=91748db0-9af4-11e4-a861-0bbf95bc6f42
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:272)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:188)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:170)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:88)
 ~[apache-cassandra-2.1.6.jar:2.1.6]

 --
 Ranger Tsao

 2015-06-29 18:01 GMT+08:00 曹志富 cao.zh...@gmail.com:

 Hi guys:
  Today I added a node to my C* cluster (2.1.6). This node (a seed
 node) I had removed from the cluster before (to change some hardware). When
 I added this node to the cluster, there were some exceptions:

 WARN  [MessagingService-Incoming-/172.19.105.10] 2015-06-29 17:27:33,443
 IncomingTcpConnection.java:97 - UnknownColumnFamilyException reading from
 socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
 cfId=91748db0-9af4-11e4-a861-0bbf95bc6f42
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation

Re: node join and decommission exception

2015-06-29 Thread
Error when the node joins the cluster:

WARN  13:30:18 UnknownColumnFamilyException reading from socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
cfId=91748db0-9af4-11e4-a861-0bbf95bc6f42
at
org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:272)
~[apache-cassandra-2.1.6.jar:2.1.6]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:188)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:170)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:88)
~[apache-cassandra-2.1.6.jar:2.1.6]
WARN  13:30:18 UnknownColumnFamilyException reading from socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
cfId=91748db0-9af4-11e4-a861-0bbf95bc6f42
at
org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:272)
~[apache-cassandra-2.1.6.jar:2.1.6]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:188)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:170)
~[apache-cassandra-2.1.6.jar:2.1.6]
at
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:88)
~[apache-cassandra-2.1.6.jar:2.1.6]

--
Ranger Tsao

2015-06-29 18:01 GMT+08:00 曹志富 cao.zh...@gmail.com:

 Hi guys:
  Today I added a node to my C* cluster (2.1.6). This node (a seed
 node) I had removed from the cluster before (to change some hardware). When
 I added this node to the cluster, there were some exceptions:

 WARN  [MessagingService-Incoming-/172.19.105.10] 2015-06-29 17:27:33,443
 IncomingTcpConnection.java:97 - UnknownColumnFamilyException reading from
 socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
 cfId=91748db0-9af4-11e4-a861-0bbf95bc6f42
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:272)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:188)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:170)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
 at
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:88)
 ~[apache-cassandra-2.1.6.jar:2.1.6]

 And when I decommissioned this node there were also some exceptions:

 578156589587, 9081960145929333737, 9082815961778382494,
 9095369564583896551, 9101513298647606725

Range not found after nodetool decommission

2015-06-24 Thread
ERROR [OptionalTasks:1] 2015-06-25 08:56:19,156 CassandraDaemon.java:223 -
Exception in thread Thread[OptionalTasks:1,5,main]
java.lang.AssertionError: -110036444293069784 not found in
--
Ranger Tsao


Re: system-hints compaction all the time

2015-06-22 Thread
This is unusual.

--
Ranger Tsao

2015-06-22 16:06 GMT+08:00 Jason Wee peich...@gmail.com:

 What's your question?

 On Mon, Jun 22, 2015 at 12:05 AM, 曹志富 cao.zh...@gmail.com wrote:

 The log looks like this:


 INFO  [CompactionExecutor:501] 2015-06-21 21:42:36,306
 CompactionTask.java:140 - Compacting
 [SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-365-Data.db')]
 INFO  [CompactionExecutor:501] 2015-06-21 21:42:37,782
 CompactionTask.java:270 - Compacted 1 sstables to
 [bin/../data/data/system/hints/system-hints-ka-366,].  18,710,207 bytes to
 18,710,207 (~100% of original) in 1,476ms = 12.089054MB/s.  11 total
 partitions merged to 11.  Partition merge counts were {1:11, }
 INFO  [CompactionExecutor:502] 2015-06-21 21:52:37,784
 CompactionTask.java:140 - Compacting
 [SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-366-Data.db')]
 INFO  [CompactionExecutor:502] 2015-06-21 21:52:39,223
 CompactionTask.java:270 - Compacted 1 sstables to
 [bin/../data/data/system/hints/system-hints-ka-367,].  18,710,207 bytes to
 18,710,207 (~100% of original) in 1,438ms = 12.408515MB/s.  11 total
 partitions merged to 11.  Partition merge counts were {1:11, }
 INFO  [CompactionExecutor:503] 2015-06-21 22:02:39,224
 CompactionTask.java:140 - Compacting
 [SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-367-Data.db')]
 INFO  [CompactionExecutor:503] 2015-06-21 22:02:40,742
 CompactionTask.java:270 - Compacted 1 sstables to
 [bin/../data/data/system/hints/system-hints-ka-368,].  18,710,207 bytes
 to 18,710,207 (~100% of original) in 1,517ms = 11.762323MB/s.  11 total
 partitions merged to 11.  Partition merge counts were {1:11, }
 INFO  [CompactionExecutor:504] 2015-06-21 22:12:40,743
 CompactionTask.java:140 - Compacting
 [SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-368-Data.db')]
 INFO  [CompactionExecutor:504] 2015-06-21 22:12:42,262
 CompactionTask.java:270 - Compacted 1 sstables to
 [bin/../data/data/system/hints/system-hints-ka-369,].  18,710,207 bytes
 to 18,710,207 (~100% of original) in 1,518ms = 11.754574MB/s.  11 total
 partitions merged to 11.  Partition merge counts were {1:11, }
 INFO  [CompactionExecutor:505] 2015-06-21 22:22:42,264
 CompactionTask.java:140 - Compacting
 [SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-369-Data.db')]
 INFO  [CompactionExecutor:505] 2015-06-21 22:22:43,750
 CompactionTask.java:270 - Compacted 1 sstables to
 [bin/../data/data/system/hints/system-hints-ka-370,].  18,710,207 bytes
 to 18,710,207 (~100% of original) in 1,486ms = 12.007701MB/s.  11 total
 partitions merged to 11.  Partition merge counts were {1:11, }

 C* 2.1.6

 --
 Ranger Tsao





system-hints compaction all the time

2015-06-21 Thread
The log looks like this:


INFO  [CompactionExecutor:501] 2015-06-21 21:42:36,306
CompactionTask.java:140 - Compacting
[SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-365-Data.db')]
INFO  [CompactionExecutor:501] 2015-06-21 21:42:37,782
CompactionTask.java:270 - Compacted 1 sstables to
[bin/../data/data/system/hints/system-hints-ka-366,].  18,710,207 bytes to
18,710,207 (~100% of original) in 1,476ms = 12.089054MB/s.  11 total
partitions merged to 11.  Partition merge counts were {1:11, }
INFO  [CompactionExecutor:502] 2015-06-21 21:52:37,784
CompactionTask.java:140 - Compacting
[SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-366-Data.db')]
INFO  [CompactionExecutor:502] 2015-06-21 21:52:39,223
CompactionTask.java:270 - Compacted 1 sstables to
[bin/../data/data/system/hints/system-hints-ka-367,].  18,710,207 bytes to
18,710,207 (~100% of original) in 1,438ms = 12.408515MB/s.  11 total
partitions merged to 11.  Partition merge counts were {1:11, }
INFO  [CompactionExecutor:503] 2015-06-21 22:02:39,224
CompactionTask.java:140 - Compacting
[SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-367-Data.db')]
INFO  [CompactionExecutor:503] 2015-06-21 22:02:40,742
CompactionTask.java:270 - Compacted 1 sstables to
[bin/../data/data/system/hints/system-hints-ka-368,].  18,710,207 bytes
to 18,710,207 (~100% of original) in 1,517ms = 11.762323MB/s.  11 total
partitions merged to 11.  Partition merge counts were {1:11, }
INFO  [CompactionExecutor:504] 2015-06-21 22:12:40,743
CompactionTask.java:140 - Compacting
[SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-368-Data.db')]
INFO  [CompactionExecutor:504] 2015-06-21 22:12:42,262
CompactionTask.java:270 - Compacted 1 sstables to
[bin/../data/data/system/hints/system-hints-ka-369,].  18,710,207 bytes
to 18,710,207 (~100% of original) in 1,518ms = 11.754574MB/s.  11 total
partitions merged to 11.  Partition merge counts were {1:11, }
INFO  [CompactionExecutor:505] 2015-06-21 22:22:42,264
CompactionTask.java:140 - Compacting
[SSTableReader(path='/home/ant/apache-cassandra-2.1.6/bin/../data/data/system/hints/system-hints-ka-369-Data.db')]
INFO  [CompactionExecutor:505] 2015-06-21 22:22:43,750
CompactionTask.java:270 - Compacted 1 sstables to
[bin/../data/data/system/hints/system-hints-ka-370,].  18,710,207 bytes
to 18,710,207 (~100% of original) in 1,486ms = 12.007701MB/s.  11 total
partitions merged to 11.  Partition merge counts were {1:11, }

C* 2.1.6
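The telltale detail in the log above is that every compaction rewrites exactly the same 18,710,207 bytes to ~100% of the original size, every ten minutes: nothing is ever purged, the hints are just rewritten. A quick check of that invariant (a sketch that parses two "Compacted" lines condensed from the log above):

```python
import re

# Two "Compacted" lines condensed from the log above.
LOG = """\
Compacted 1 sstables to [bin/../data/data/system/hints/system-hints-ka-366,].  18,710,207 bytes to 18,710,207 (~100% of original) in 1,476ms
Compacted 1 sstables to [bin/../data/data/system/hints/system-hints-ka-367,].  18,710,207 bytes to 18,710,207 (~100% of original) in 1,438ms
"""

pattern = re.compile(r"([\d,]+) bytes to ([\d,]+)")

sizes = [(int(a.replace(",", "")), int(b.replace(",", "")))
         for a, b in pattern.findall(LOG)]

# Input size == output size on every pass: compaction purges nothing,
# i.e. the hints are never delivered or expired, only rewritten.
stuck = all(before == after for before, after in sizes)
```

When the hints never shrink like this, the usual causes are a target node that is unreachable or hints that can no longer be replayed; if I remember the 2.1 tooling correctly, `nodetool truncatehints` can clear hints that will never be delivered.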

--
Ranger Tsao


exception when anti-entropy repair

2015-06-20 Thread
ERROR [RepairJobTask:3] 2015-06-21 00:18:02,187 RepairJob.java:145 - Error
occurred during snapshot phase

java.lang.RuntimeException: Could not create snapshot at /172.19.104.107

at
org.apache.cassandra.repair.SnapshotTask$SnapshotCallback.onFailure(SnapshotTask.java:78)
~[apache-cassandra-2.1.6.jar:2.1.6]

at
org.apache.cassandra.net.MessagingService$5$1.run(MessagingService.java:350)
~[apache-cassandra-2.1.6.jar:2.1.6]

at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
~[na:1.7.0_71]

at java.util.concurrent.FutureTask.run(FutureTask.java:262)
~[na:1.7.0_71]

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[na:1.7.0_71]

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_71]

at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]

ERROR [AntiEntropySessions:1024] 2015-06-21 00:18:02,193
RepairSession.java:303 - [repair #8996edd0-175f-11e5-97db-172be67ae925]
session completed with the following error

java.io.IOException: Failed during snapshot creation.

at
org.apache.cassandra.repair.RepairSession.failedSnapshot(RepairSession.java:344)
~[apache-cassandra-2.1.6.jar:2.1.6]

at
org.apache.cassandra.repair.RepairJob$2.onFailure(RepairJob.java:146)
~[apache-cassandra-2.1.6.jar:2.1.6]

at
com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
~[guava-16.0.jar:na]

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[na:1.7.0_71]

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_71]

at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]

INFO  [AntiEntropySessions:1028] 2015-06-21 00:18:02,193
RepairSession.java:260 - [repair #ec84e1b0-1767-11e5-97db-172be67ae925] new
session: will sync cbase127/172.19.104.117, /172.19.104.109, /172.19.104.116
on range (-6705672878737501889,-6700727306321494079] for
mention.[graphindex_lock_, txlog, edgestore, titan_ids, graphindex,
system_properties, edgestore_lock_, system_properties_lock_, systemlog]

ERROR [AntiEntropySessions:1024] 2015-06-21 00:18:02,193
CassandraDaemon.java:223 - Exception in thread
Thread[AntiEntropySessions:1024,5,RMI Runtime]

java.lang.RuntimeException: java.io.IOException: Failed during snapshot
creation.

at com.google.common.base.Throwables.propagate(Throwables.java:160)
~[guava-16.0.jar:na]

at
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
~[apache-cassandra-2.1.6.jar:2.1.6]

at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
~[na:1.7.0_71]

at java.util.concurrent.FutureTask.run(FutureTask.java:262)
~[na:1.7.0_71]

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
~[na:1.7.0_71]

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_71]

at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]

Caused by: java.io.IOException: Failed during snapshot creation.

at
org.apache.cassandra.repair.RepairSession.failedSnapshot(RepairSession.java:344)
~[apache-cassandra-2.1.6.jar:2.1.6]

at
org.apache.cassandra.repair.RepairJob$2.onFailure(RepairJob.java:146)
~[apache-cassandra-2.1.6.jar:2.1.6]

at
com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
~[guava-16.0.jar:na]

... 3 common frames omitted

C* 2.1.6 default config

--
Ranger Tsao


cfstats ERROR

2015-06-20 Thread
error:
/home/ant/apache-cassandra-2.1.6/bin/../data/data/blogger/edgestore/blogger-edgestore-tmplink-ka-146100-Data.db
-- StackTrace --
java.lang.AssertionError:
/home/ant/apache-cassandra-2.1.6/bin/../data/data/blogger/edgestore/blogger-edgestore-tmplink-ka-146100-Data.db
at
org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
at
org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
at
org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
at
com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)

vnodes, LCS

--
Ranger Tsao


abnormal log after remove a node

2015-06-19 Thread
I have a C* 2.1.5 cluster with 24 nodes. A few days ago, I removed a node from
this cluster using nodetool decommission.

But today I found log entries like this:

INFO  [GossipStage:1] 2015-06-19 17:38:05,616 Gossiper.java:968 -
InetAddress /172.19.105.41 is now DOWN
INFO  [GossipStage:1] 2015-06-19 17:38:05,617 StorageService.java:1885 -
Removing tokens [-1014432261309809702, -1055322450438958612,
-1120728727235087395, -1191392141261832305, -1203676771883970142,
-1215563040745505837, -1215648909329054362, -1269531760567530381,
-1278047879489577908, -1313427877031136549, -1342822572958042617,
-1350792764922315814, -1383390744017639599, -139000372807970456,
-140827955201469664, -1631551789771606023, -1633789813430312609,
-1795528665156349205, -1836619444785023397, -1879127294549041822,
-1962337787208890426, -2022309807234530256, -2033402140526360327,
-2089413865145942100, -210961549458416802, -2148530352195763113,
-2184481573787758786, -610790268720205, -2340762266634834427,
-2513416003567685694, -2520971378752190013, -2596695976621541808,
-2620636796023437199, -2640378596436678113, -2679143017361311011,
-2721176590519112233, -2749213392354746126, -279267896827516626,
-2872377759991294853, -2904711688111888325, -290489381926812623,
-3000574339499272616, -301428600802598523, -3019280155316984595,
-3024451041907074275, -3056898917375012425, -3161300347260716852,
-3166392383659271772, -3327634380871627036, -3530685865340274372,
-3563112657791369745, -366930313427781469, -3729582520450700795,
-3901838244986519991, -4065326606010524312, -4174346928341550117,
-4184239233207315432, -4204369933734181327, -4206479093137814808,
-421410317165821100, -4311166118017934135, -4407123461118340117,
-4466364858622123151, -4466939645485100087, -448955147512581975,
-4587780638857304626, -4649897584350376674, -4674234125365755024
, -4833801201210885896, -4857586579802212277, -4868896650650107463,
-4980063310159547694, -4983471821416248610, -4992846054037653676,
-5026994389965137674, -514302500353679181
0, -5198414516309928594, -5245363745777287346, -5346838390293957674,
-5374413419545696184, -5427881744040857637, -5453876964430787287,
-5491923669475601173, -55219734138599212
6, -5523011502670737422, -5537121117160410549, -5557015938925208697,
-5572489682738121748, -5745899409803353484, -5771239101488682535,
-5893479791287484099, -59766730414807540
44, -6014643892406938367, -6086002438656595783, -6129360679394503700,
-6224240257573911174, -6290393495130499466, -6378712056928268929,
-6430306056990093461, -6800188263839065
013, -6912720411187525051, -7160327814305587432, -7175004328733776324,
-7272070430660252577, -7307945744786025148, -742448651973108101,
-7539255117639002578, -7657460716997978
94, -7846698077070579798, -7870621904906244395, -7900841391761900719,
-7918145426423910061, -7936795453892692473, -8070255024778921411,
-8086888710627677669, -8124855925323654
631, -8175270408138820500, -8271197636596881168, -8336685710406477123,
-8466220397076441627, -8534337908154758270, -8550484400487603561,
-862246738021989870, -8727219287242892
185, -8895705475282612927, -8921801772904834063, -9057266752652143883,
-9059183540698454288, -9067986437682229598, -9148183367896132028,
-962208188860606543, 10859447725819218
30, 1189775396643491793, 1253728955879686947, 1389982523380382228,
1429632314664544045, 143610053770130548, 150118120072602242,
1575692041584712198, 1624575905722628764, 17894
76212785155173, 1995296121962835019, 2041217364870030239,
2120277336231792146, 2124445736743406711, 2154979704292433983,
2340726755918680765, 23481654796845972, 23620268084352
24407, 2366144489007464626, 2381492708106933027, 2398868971489617398,
2427315953339163528, 2433999003913998534, 2633074510238705620,
266659839023809792, 2677817641360639089, 2
719725410894526151, 2751925111749406683, 2815703589803785617,
3041515796379693113, 3044903149214270978, 3094954503756703989,
3243933267690865263, 3246086646486800371, 33270068
97333869434, 3393657685587750192, 3395065499228709345, 3426126123948029459,
3500469615600510698, 3644011364716880512, 3693249207133187620,
3776164494954636918, 38780676797
8035, 3872151295451662867, 3937077827707223414, 4041082935346014761,
4060208918173638435, 4086747843759164940, 4165638694482690057,
4203996339238989224, 4220155275330961826, 4
366784953339236686, 4390116924352514616, 4391225331964772681,
4392419346255765958, 4448400054980766409, 4463335839328115373,
4547306976104362915, 4588174843388248100, 48438580
67983993745, 4912719175808770608, 499628843707992459, 5004392861473086088,
5021047773702107258, 510226752691159107, 5109551630357971118,
5157669927051121583, 51627694176199618
24, 5238710860488961530, 5245958115092331518, 5302459768185143407,
5373077323749320571, 5445982956737768774, 5526076427753104565,
5531878975169972758, 5590672474842108747, 561
8238086143944892, 5645763748154253201, 5648082473497629258,
5799608283794045232, 5968931466409317704, 6080339666926312644,
6222992739052178144, 6329332485451402638, 

seems a doc error

2015-06-10 Thread
In this doc: Enabling JMX authentication
http://docs.datastax.com/en/cassandra/2.1/cassandra/security/secureJmxAuthentication.html

In that chapter, the command:

$ nodetool status -u cassandra -pw cassandra

should change to

 $ nodetool -u cassandra -pw cassandra status

--
Ranger Tsao


auto clear data with ttl

2015-06-08 Thread
I have C* 2.1.5 and store some data with a TTL. I reduced gc_grace_seconds to
zero.

But it seems to have no effect.

Did I miss something?
--
Ranger Tsao
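
A minimal CQL sketch of the setup described above (the keyspace and table
names are hypothetical; the 60-day TTL matches the figure mentioned elsewhere
in this archive). Note that expired cells are only purged from disk when the
SSTables holding them are compacted, and gc_grace_seconds = 0 removes the
safety window that normally protects against deleted data reappearing if a
node misses a delete:

```sql
-- Hypothetical table whose rows expire 60 days after insert.
-- gc_grace_seconds = 0 makes the resulting tombstones immediately
-- purgeable once a compaction touches them.
CREATE TABLE demo_ks.events (
    id    text,
    ts    timestamp,
    value text,
    PRIMARY KEY (id, ts)
) WITH default_time_to_live = 5184000   -- 60 days in seconds
  AND gc_grace_seconds = 0;
```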


Re: auto clear data with ttl

2015-06-08 Thread
Thank you. I have changed unchecked_tombstone_compaction to true. A major
compaction would leave one big SSTable, so I think this option is the better
choice.

--
Ranger Tsao

2015-06-09 11:16 GMT+08:00 Aiman Parvaiz ai...@flipagram.com:

 gc_grace zero will remove tombstones without any delay after compaction. So
 it's possible that the SSTables containing tombstones still need to be
 compacted. Either wait for compaction to happen or run a manual compaction,
 depending on your compaction strategy. Manual compaction does have some
 drawbacks, so please read about it first.

 Sent from my iPhone

 On Jun 8, 2015, at 7:26 PM, 曹志富 cao.zh...@gmail.com wrote:

 I have C* 2.1.5 and store some data with a TTL. I reduced gc_grace_seconds
 to zero.

 But it seems to have no effect.

 Did I miss something?
 --
 Ranger Tsao
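
As a sketch of the change Ranger describes (table name hypothetical;
'tombstone_threshold' is shown with its 2.1 default just to make the behavior
explicit), enabling unchecked tombstone compactions under STCS looks like:

```sql
-- Hypothetical table; lets STCS run single-SSTable tombstone
-- compactions even when overlap checks would normally block them.
ALTER TABLE demo_ks.events
WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'unchecked_tombstone_compaction': 'true',
    'tombstone_threshold': '0.2'
};
```

This avoids the single giant SSTable that a major compaction would leave
behind.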




Re: what this error mean

2015-05-29 Thread
It seems this issue can cause the split-brain behavior.

--
Ranger Tsao

2015-05-29 14:57 GMT+08:00 Jason Wee peich...@gmail.com:

 why it happened? from the code, it looks like this condition is not null
 https://github.com/apache/cassandra/blob/cassandra-2.1.3/src/java/org/apache/cassandra/io/sstable/SSTableReader.java#L921

 or you can quickly fix this by upgrading to 2.1.5, i noticed there is code
 change for this class
 https://github.com/apache/cassandra/blob/cassandra-2.1.5/src/java/org/apache/cassandra/io/sstable/SSTableReader.java#L921


 hth

 jason



 On Fri, May 29, 2015 at 9:39 AM, 曹志富 cao.zh...@gmail.com wrote:

 I have a 25-node C* cluster running C* 2.1.3. These days one node has hit
 split brain many times.

 Checking the log, I found this:

 INFO  [MemtableFlushWriter:118] 2015-05-29 08:07:39,176
 Memtable.java:378 - Completed flushing
 /home/ant/apache-cassandra-2.1.3/bin/../data/data/system/sstable_activity-5a1ff2
 67ace03f128563cfae6103c65e/system-sstable_activity-ka-4371-Data.db (8187
 bytes) for commitlog position ReplayPosition(segmentId=1432775133526,
 position=16684949)
 ERROR [IndexSummaryManager:1] 2015-05-29 08:10:30,209
 CassandraDaemon.java:167 - Exception in thread
 Thread[IndexSummaryManager:1,1,main]
 java.lang.AssertionError: null
 at
 org.apache.cassandra.io.sstable.SSTableReader.cloneWithNewSummarySamplingLevel(SSTableReader.java:921)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
 at
 org.apache.cassandra.io.sstable.IndexSummaryManager.adjustSamplingLevels(IndexSummaryManager.java:410)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
 at
 org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:288)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
 at
 org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:238)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
 at
 org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:139)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
 at
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
 at
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:82)
 ~[apache-cassandra-2.
 1.3.jar:2.1.3]
 at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 [na:1.7.0_71]
 at
 java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
 [na:1.7.0_71]
 at
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
 [na:1.7.0_71]
 at
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [na:1.7.0_71]
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 [na:1.7.0_71]
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 [na:1.7.0_71]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]

 I want to know why this happens and how to fix it.

 Thanks all
 --
 Ranger Tsao





what this error mean

2015-05-28 Thread
I have a 25-node C* cluster running C* 2.1.3. These days one node has hit
split brain many times.

Checking the log, I found this:

INFO  [MemtableFlushWriter:118] 2015-05-29 08:07:39,176
Memtable.java:378 - Completed flushing
/home/ant/apache-cassandra-2.1.3/bin/../data/data/system/sstable_activity-5a1ff2
67ace03f128563cfae6103c65e/system-sstable_activity-ka-4371-Data.db (8187
bytes) for commitlog position ReplayPosition(segmentId=1432775133526,
position=16684949)
ERROR [IndexSummaryManager:1] 2015-05-29 08:10:30,209
CassandraDaemon.java:167 - Exception in thread
Thread[IndexSummaryManager:1,1,main]
java.lang.AssertionError: null
at
org.apache.cassandra.io.sstable.SSTableReader.cloneWithNewSummarySamplingLevel(SSTableReader.java:921)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.io.sstable.IndexSummaryManager.adjustSamplingLevels(IndexSummaryManager.java:410)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:288)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:238)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:139)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:82)
~[apache-cassandra-2.
1.3.jar:2.1.3]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
[na:1.7.0_71]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
[na:1.7.0_71]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
[na:1.7.0_71]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
[na:1.7.0_71]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[na:1.7.0_71]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]

I want to know why this happens and how to fix it.

Thanks all
--
Ranger Tsao


java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code

2015-05-03 Thread
Hi guys:

I have a C* 2.1.3 cluster, 25 nodes, running on JDK 1.7.0_71, CentOS (kernel
2.6.32-220.el6.x86_64), 4 cores, 32 GB RAM.

Today one of the nodes logged an error like this:

java.lang.InternalError: a fault occurred in a recent unsafe memory access
operation in compiled Java code
at
org.apache.cassandra.io.util.AbstractDataInput.readUnsignedShort(AbstractDataInput.java:312)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:317)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:327)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1425)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:53)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:89)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:129)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:53)
~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47)
~[apache-cassandra-2.1.3.jar:2.1.3]
at
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
~[apache-cassandra-2.1.3.jar:2.1.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
~[na:1.7.0_71]
at
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
[apache-cassandra-2.1.3.jar:2.1.3]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]

I have found issue CASSANDRA-5737
https://issues.apache.org/jira/browse/CASSANDRA-5737.

So I want to ask: what can I do about this error?

Thank you all!

--
Ranger Tsao


Re: how to clear data from disk

2015-03-09 Thread
nodetool clearsnapshot

--
Ranger Tsao

2015-03-10 10:47 GMT+08:00 鄢来琼 laiqiong@gtafe.com:

  Hi ALL,



 After dropping a table, I found the data is not removed from disk; I should
 have reduced gc_grace_seconds before the drop operation.

 I would have to wait for 10 days, but there is not enough disk space.

 Could you tell me if there is a way to clear the data from disk quickly?

 Thank you very much!



 Peter



C* 2.0.9 Compaction Error

2015-03-09 Thread
Hi,every one:

I have a 12-node C* 2.0.9 cluster for Titan. I found an error during
compaction; the exception stack:

java.lang.AssertionError: Added column does not sort as the last column

at
org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:115)

at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:116)

at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:150)

at
org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)

at
org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)

at
org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:85)

at
org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)

at
org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)

at
org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)

at
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)

at
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)

at
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)

at
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)

at
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:154)

at
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)

at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

at
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)

at
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)

at
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)

at java.util.concurrent.FutureTask.run(FutureTask.java:262)

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:744)


I found issue CASSANDRA-7470
https://issues.apache.org/jira/browse/CASSANDRA-7470, but it's about CQL.

So why does this error occur?


--
Ranger Tsao


Run witch repair cmd when increase replication factor

2015-03-06 Thread
I want to increase the replication factor in my C* 2.1.3 cluster (RF changing
from 2 to 3 for some keyspaces).

I read the doc Updating the replication factor
http://www.datastax.com/documentation/cql/3.1/cql/cql_using/update_ks_rf_t.html
.
Step two is to run nodetool repair. But as I understand it, nodetool repair
defaults to a full repair, which seems to conflict with step three. So which
repair command should I run when increasing the replication factor?

Thanks all.

--
Ranger Tsao
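
For reference, the RF change itself is a single CQL statement (keyspace name,
replication strategy, and data-center name below are illustrative); the repair
that follows it is what actually copies data onto the new replicas:

```sql
-- Hypothetical keyspace: raise the replication factor from 2 to 3.
ALTER KEYSPACE demo_ks
WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3
};
```

After this, each node needs a repair of the affected keyspace (e.g. nodetool
repair demo_ks) so the newly assigned replicas are actually populated.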


Re: Input/Output Error

2015-03-04 Thread
thanks!

--
Ranger Tsao

2015-03-05 3:40 GMT+08:00 Jens Rantil jens.ran...@tink.se:

 Hi,

 Check your Cassandra and kernel (if on Linux) log files for errors.

 Cheers,
 Jens

 –
 Sent from Mailbox https://www.dropbox.com/mailbox


 On Wed, Mar 4, 2015 at 2:18 AM, 曹志富 cao.zh...@gmail.com wrote:

 Sometimes my C* 2.1.3 cluster hits this error during compaction or
 streaming. Is it caused by a disk or filesystem problem?

 Thanks All.

  --
 Ranger Tsao





Input/Output Error

2015-03-03 Thread
Sometimes my C* 2.1.3 cluster hits this error during compaction or streaming.
Is it caused by a disk or filesystem problem?

Thanks All.

--
Ranger Tsao


Why nodetool netstats show this

2015-02-21 Thread
As I asked before, I have a 20-node C* cluster based on Cassandra 2.1.2,
using vnodes.

When I run nodetool repair (anti-entropy repair) and then run nodetool
netstats, all the nodes show this message:

Mode: NORMAL
Unbootstrap cfe03590-b02a-11e4-95c5-b5f6ad9c7711
/172.19.105.49

Node 172.19.105.49 is the latest to join my cluster; its status is Up/Normal.

I want to know why this message appears. Should I remove this node?
--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/


Re: can't delete tmp file

2015-02-19 Thread
Thank you, Roland.

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/

2015-02-19 20:32 GMT+08:00 Roland Etzenhammer r.etzenham...@t-online.de:

 Hi,

 try 2.1.3 - with 2.1.2 this is normal. From the changelog:

 * Make sure we don't add tmplink files to the compaction strategy
 (CASSANDRA-8580)
 * Remove tmplink files for offline compactions (CASSANDRA-8321)

 In most cases they are safe to delete, I did this when the node was down.

 Cheers,
 Roland



Node joining take a long time

2015-02-19 Thread
Hi guys:
I have a 20-node C* cluster with vnodes, version 2.1.2. When I add a node to
my cluster, it takes a long time, and nodetool netstats on some existing
nodes shows this:

Mode: NORMAL
Unbootstrap cfe03590-b02a-11e4-95c5-b5f6ad9c7711
/172.19.105.49
Receiving 68 files, 23309801005 bytes total

I want to know: is there some problem with my cluster?
--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/


Re: can't delete tmp file

2015-02-19 Thread
Should I just upgrade my cluster to 2.1.3?

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/

2015-02-19 20:32 GMT+08:00 Roland Etzenhammer r.etzenham...@t-online.de:

 Hi,

 try 2.1.3 - with 2.1.2 this is normal. From the changelog:

 * Make sure we don't add tmplink files to the compaction strategy
 (CASSANDRA-8580)
 * Remove tmplink files for offline compactions (CASSANDRA-8321)

 In most cases they are safe to delete, I did this when the node was down.

 Cheers,
 Roland



Re: Node joining take a long time

2015-02-19 Thread
So what can I do? Wait for 2.1.4 or upgrade to 2.1.3?

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/

2015-02-20 3:16 GMT+08:00 Robert Coli rc...@eventbrite.com:

 On Thu, Feb 19, 2015 at 7:34 AM, Mark Reddy mark.l.re...@gmail.com
 wrote:

 I'm sure Rob will be along shortly to say that 2.1.2 is, in his opinion,
 broken for production use...an opinion I'd agree with. So bear that in mind
 if you are running a production cluster.


 If you speak of the devil, he will appear.

 But yes, really, run 2.1.1 or 2.1.3, 2.1.2 is a bummer. Don't take the
 brown 2.1.2.

 This commentary is likely unrelated to the problem the OP is having, which
 I would need the information Mark asked for to comment on. :)

 =Rob




Re: Node joining take a long time

2015-02-19 Thread
First, thanks to all of you.

After almost three days, the status is still Joining. My cluster has about
650 GB per node.

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/

2015-02-20 3:16 GMT+08:00 Robert Coli rc...@eventbrite.com:

 On Thu, Feb 19, 2015 at 7:34 AM, Mark Reddy mark.l.re...@gmail.com
 wrote:

 I'm sure Rob will be along shortly to say that 2.1.2 is, in his opinion,
 broken for production use...an opinion I'd agree with. So bear that in mind
 if you are running a production cluster.


 If you speak of the devil, he will appear.

 But yes, really, run 2.1.1 or 2.1.3, 2.1.2 is a bummer. Don't take the
 brown 2.1.2.

 This commentary is likely unrelated to the problem the OP is having, which
 I would need the information Mark asked for to comment on. :)

 =Rob




Re: How to deal with too many sstables

2015-02-02 Thread
I just ran nodetool repair.

The nodes which have many SSTables are the newest in my cluster. Before these
nodes were added, the cluster never compacted automatically, because it is a
write-only cluster.

Thanks.

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/

2015-02-03 12:16 GMT+08:00 Flavien Charlon flavien.char...@gmail.com:

 Did you run incremental repair? Incremental repair is broken in 2.1 and
 tends to create way too many SSTables.

 On 2 February 2015 at 18:05, 曹志富 cao.zh...@gmail.com wrote:

 Hi,all:
 I have an 18-node C* cluster with Cassandra 2.1.2. Some nodes have about
 40,000+ SSTables.

 My compaction strategy is STCS.

 Could someone suggest a solution for this situation?

 Thanks.
 --
 曹志富
 Mobile: 18611121927
 Email: caozf.zh...@gmail.com
 Weibo: http://weibo.com/boliza/





Re: How to deal with too many sstables

2015-02-02 Thread
You are right. I have already changed cold_reads_to_omit to 0.0.

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/

2015-02-03 14:15 GMT+08:00 Roland Etzenhammer r.etzenham...@t-online.de:

  Hi,

 maybe you are running into an issue that I also had on my test cluster.
 Since there were almost no reads on it cassandra did not run any minor
 compactions at all. Solution for me (in this case) was:

 ALTER TABLE tablename WITH compaction = {'class':
 'SizeTieredCompactionStrategy', 'min_threshold': '4', 'max_threshold':
 '32', 'cold_reads_to_omit': 0.0};
 where cold_reads_to_omit is the trick.

 Anyway as Eric and Marcus among others suggest, do not run 2.1.2 for
 production as it has many issues. I'm looking forward to test 2.1.3 when it
 arrives.

 Cheers,
 Roland


 On 03.02.2015 at 03:05, 曹志富 wrote:

  Hi,all:
 I have an 18-node C* cluster with Cassandra 2.1.2. Some nodes have about
 40,000+ SSTables.

  my compaction strategy is STCS.

  Could someone give me some solution to deal with this situation.

  Thanks.
  --
 曹志富
 Mobile: 18611121927
 Email: caozf.zh...@gmail.com
 Weibo: http://weibo.com/boliza/





How to deal with too many sstables

2015-02-02 Thread
Hi,all:
I have an 18-node C* cluster with Cassandra 2.1.2. Some nodes have about
40,000+ SSTables.

My compaction strategy is STCS.

Could someone suggest a solution for this situation?

Thanks.
--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/


Re: SStables can't compat automaticly

2015-01-26 Thread
You are right; currently my cluster only does writes. When the cluster
build-out finishes in about two months, I will change back to the default
thresholds.

Thanks for your reply.

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/

2015-01-26 22:40 GMT+08:00 Eric Stevens migh...@gmail.com:

 If you are doing only writes and no reads, then 'cold_reads_to_omit' is
 probably preventing your cluster from crossing a threshold where it decides
 it needs to engage in compaction.  Setting it to 0.0 should fix this, but
 remember that you tuned it as you should be able to revert it to default
 thresholds once you start engaging in reads.

 On Mon, Jan 26, 2015 at 3:20 AM, 曹志富 cao.zh...@gmail.com wrote:

 I read the CQL table properties again. This property can control compaction
 behavior. Right now my C* cluster only writes, without any reads.

 --
 曹志富
 Mobile: 18611121927
 Email: caozf.zh...@gmail.com
 Weibo: http://weibo.com/boliza/

 2015-01-26 17:41 GMT+08:00 Roland Etzenhammer r.etzenham...@t-online.de
 :

  Hi,

 are you running 2.1.2 evenutally? I had this problem recently and there
 were two topics here about this. Problem was, that my test cluster had
 almost no reads and did not compact sstables.

 Reason for me was that those minor compactions did not get triggered
 since there were almost no reads on that tables. Setting
 'cold_reads_to_omit' to 0 did the job for me:

 ALTER TABLE tablename WITH compaction = {'class':
 'SizeTieredCompactionStrategy', 'min_threshold': '4', 'max_threshold':
 '32', 'cold_reads_to_omit': 0.0};

 Credits to Tyler and Eric for the pointers.


 Maybe that could help.

 Cheers,
 Roland

 On 26.01.2015 at 09:56, 曹志富 wrote:

 No; to confirm this I have run this command on all my nodes: bin/nodetool
 enableautocompaction

  --
 曹志富
 Mobile: 18611121927
 Email: caozf.zh...@gmail.com
 Weibo: http://weibo.com/boliza/

 2015-01-26 16:49 GMT+08:00 Jason Wee peich...@gmail.com:

 Did you disable auto compaction through nodetool?

 disableautocompaction    Disable autocompaction for the given keyspace
 and column family

  Jason

 On Mon, Jan 26, 2015 at 11:34 AM, 曹志富 cao.zh...@gmail.com wrote:

  Hi everybody:

 I have 18 nodes using Cassandra 2.1.2. Every node has 4 cores, 32 GB
 RAM, and a 2 TB hard disk; the OS is CentOS release 6.2 (Final).

 I have followed the recommended production settings to configure my
 system, such as disabling swap and unlimited memlock.

 My heap size is:

 MAX_HEAP_SIZE=8G
 MIN_HEAP_SIZE=8G
 HEAP_NEWSIZE=2G

 I use STCS; other config is default. I am using DataStax Java Driver
 2.1.2, with a BatchStatement committing 100 keys at a time.

 When I run my cluster and insert data from Kafka (1 keys/s), after
 2 days every node can't compact and there are too many SSTables.

 I tried a major compaction to compact the SSTables; it took a very long
 time. Also, the new SSTables aren't compacted automatically.

 Tracing the log, CMS GC runs too often, almost once every 30 minutes.

 Could someone help me solve this problem.


  --
 曹志富
 Mobile: 18611121927
 Email: caozf.zh...@gmail.com
 Weibo: http://weibo.com/boliza/









Re: SStables can't compat automaticly

2015-01-26 Thread
I read the CQL table properties again. This property can control compaction
behavior. Right now my C* cluster only writes, without any reads.

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/

2015-01-26 17:41 GMT+08:00 Roland Etzenhammer r.etzenham...@t-online.de:

  Hi,

 are you running 2.1.2, by any chance? I had this problem recently, and there
 were two topics here about it. The problem was that my test cluster had
 almost no reads and did not compact SSTables.

 The reason for me was that those minor compactions did not get triggered
 since there were almost no reads on those tables. Setting
 'cold_reads_to_omit' to 0 did the job for me:

 ALTER TABLE tablename WITH compaction = {'class':
 'SizeTieredCompactionStrategy', 'min_threshold': '4', 'max_threshold':
 '32', 'cold_reads_to_omit': 0.0};

 Credits to Tyler and Eric for the pointers.


 Maybe that could help.

 Cheers,
 Roland

 On 26.01.2015 at 09:56, 曹志富 wrote:

 No; to confirm this I have run this command on all my nodes: bin/nodetool
 enableautocompaction

  --
 曹志富
 Mobile: 18611121927
 Email: caozf.zh...@gmail.com
 Weibo: http://weibo.com/boliza/

 2015-01-26 16:49 GMT+08:00 Jason Wee peich...@gmail.com:

 Did you disable auto compaction through nodetool?

 disableautocompaction    Disable autocompaction for the given keyspace
 and column family

  Jason

 On Mon, Jan 26, 2015 at 11:34 AM, 曹志富 cao.zh...@gmail.com wrote:

  Hi everybody:

  I have 18 nodes using cassandra2.1.2.Every node has 4 core, 32 GB RAM,
 2T hard disk,OS is CentOS release 6.2 (Final).

  I have follow the Recommended production settings to config my
 system.such as disable SWAP,unlimited mem lock...

  My heap size is:

  MAX_HEAP_SIZE=8G
 MIN_HEAP_SIZE=8G
 HEAP_NEWSIZE=2G

   I use STCS,other config using default,using Datastax Java Driver
 2.1.2. BatchStatment 100key commit per time.

  When I run my cluster and insert data from kafka (1 keys/s) after
 2 days,every node can't compact  some there too many sstables.

  I tried a major compaction to compact the SSTables; it took a very
 long time. Also, the new SSTables are not compacted automatically.


  I traced the log; CMS GC runs too often, almost once every 30 minutes.

  Could someone help me solve this problem?


  --
  曹志富
  Mobile: 18611121927
 Email: caozf.zh...@gmail.com
 Weibo: http://weibo.com/boliza/







Re: SSTables can't compact automatically

2015-01-26 Thread
Yes, I am using Cassandra 2.1.2 with JDK 1.7.0_71. I will try your solution.

Thank you, Roland Etzenhammer!

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/








Re: SSTables can't compact automatically

2015-01-26 Thread
No. To confirm this, I have run this command on all my nodes: bin/nodetool
enableautocompaction

--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/






SSTables can't compact automatically

2015-01-25 Thread
Hi everybody:

I have 18 nodes running Cassandra 2.1.2. Every node has 4 cores, 32 GB RAM,
and a 2 TB hard disk; the OS is CentOS release 6.2 (Final).

I have followed the recommended production settings to configure my
system, such as disabling swap and setting unlimited memlock...

My heap size is:

MAX_HEAP_SIZE=8G
MIN_HEAP_SIZE=8G
HEAP_NEWSIZE=2G

I use STCS with the other config at defaults, using Datastax Java Driver 2.1.2
and a BatchStatement committing 100 keys at a time.
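For background on why SSTables can pile up under STCS: it groups SSTables into buckets of similar size and only compacts a bucket once it holds at least min_threshold (default 4) members. A rough sketch of that bucketing idea (the grouping rule below is illustrative and simplified, not Cassandra's exact implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of size-tiered bucketing (NOT Cassandra's real code).
// A bucket only becomes a compaction candidate once it holds at least
// minThreshold SSTables of similar size.
public class StcsSketch {
    static int compactableBuckets(long[] sstableSizes, int minThreshold) {
        List<List<Long>> buckets = new ArrayList<>();
        for (long size : sstableSizes) {
            List<Long> target = null;
            for (List<Long> b : buckets) {
                // "Similar size": within 0.5x-1.5x of the bucket's average.
                double avg = b.stream().mapToLong(Long::longValue).average().getAsDouble();
                if (size >= 0.5 * avg && size <= 1.5 * avg) { target = b; break; }
            }
            if (target == null) { target = new ArrayList<>(); buckets.add(target); }
            target.add(size);
        }
        int compactable = 0;
        for (List<Long> b : buckets)
            if (b.size() >= minThreshold) compactable++;
        return compactable;
    }

    public static void main(String[] args) {
        // Three similar ~100 MB SSTables: under min_threshold=4, nothing compacts yet.
        System.out.println(compactableBuckets(new long[]{100, 110, 95}, 4));      // 0
        // A fourth similar one arrives: the bucket now qualifies.
        System.out.println(compactableBuckets(new long[]{100, 110, 95, 105}, 4)); // 1
    }
}
```

If flushes keep producing SSTables that never fill a bucket to the threshold (or candidates are filtered out, as with the coldness check discussed in this thread), the on-disk file count keeps growing.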

When I run my cluster and insert data from Kafka (1 keys/s), after 2
days every node can't compact and there are too many SSTables.

I tried a major compaction to compact the SSTables; it took a very long
time. Also, the new SSTables are not compacted automatically.


I traced the log; CMS GC runs too often, almost once every 30 minutes.

Could someone help me solve this problem?


--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/