Driver Debug trace log

2019-04-10 Thread Rajasekhar Kommineni
Hi All,


We are seeing some timeouts for web calls that go to the remote DC, so we
enabled debug tracing for the DataStax Java driver and are getting the messages
below in the application log file. However, we are not able to determine the
exact reason for the timeout. Need assistance in finding it.

DataStax Java driver version : 3.3.1
Apache Cassandra Version :  3.11.1
Cluster Setup : 2 DCs (4 nodes each)


2019-04-10T09:29:57,228 DEBUG [Connection] (cluster1-nio-worker-2:)  
Connection[/hostname:port-1, inFlight=0, closed=false] Response received on 
stream 6784 but no handler set anymore (either the request has timed out or it 
was closed due to another error). Received message is ROWS [1 columns]
 | 
0x7b22636172644964223a7b2266616d696c794964223a22353634222c2262616e6e65724964223a225f6361746368616c6c5f222c22636172644964223a223130303030363939353037227d2c226f657273223a5b7b226f65724964223a2231353338303838222c226f65725075624964223a223131303034363036323c2273636f7265223a33383231363739343031322c226461746552616e6765223a7b22666972737444617465223a22323031392d30342d3031222c226c61737444617465223a22323031392d30342d3330227d7d2c7b226f65724964223a2231353430303930222c22...
 [message of size 82098 truncated]

2019-04-10T09:29:57,226 ERROR [CassandraBucket] (qtp1395262169-485:) 
trn-daa3ac4a5713402bb79078a12733ae72 get caught OperationTimedOutException: 
tryNumber=1 id=primarykey 
class=com.datastax.driver.core.exceptions.OperationTimedOutException 
ex=[/hostname:port] Timed out waiting for server response
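
For context on reading this trace: the DEBUG line means the response arrived
after the driver had already timed out the request on the client side (note the
OperationTimedOutException is logged 2 ms before the late ROWS response). A
hedged first step is to compare the coordinator's server-side latencies against
the driver's read timeout, which defaults to 12 seconds in driver 3.x:

nodetool proxyhistograms                         # run on the remote-DC coordinators
nodetool tablehistograms my_keyspace my_table    # placeholder names

If the server-side p99 is nowhere near the client timeout, the delay is more
likely in the network or the client (application GC, connection pool
saturation) than in Cassandra itself.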

Thanks,






Adding new Datacenter

2018-12-05 Thread Rajasekhar Kommineni
Hello everyone,

I am adding a new DC to my existing cluster; the application uses a consistency
of ONE. Will the new nodes of the new DC participate in serving requests during
the bootstrapping/rebuild? I tested the scenario of rebuilding a lost seed node
with nodetool rebuild, where the binary (native transport) service was not
started until the rebuild completed.

Is there any difference between nodetool rebuild and nodetool rebuild -- <dc-name>?
Thanks,
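
A note on the rebuild question: until the rebuild finishes, the new DC's nodes
are up in gossip, so at a plain (non-LOCAL) consistency of ONE they can be
picked to serve reads for data they have not streamed yet; pinning clients to
the old DC with a DC-aware load-balancing policy and LOCAL_ONE during the build
is the usual guard. A hedged sketch of the sequence, with placeholder DC names:

# 1) start the new nodes with auto_bootstrap: false so they join without streaming
# 2) ALTER KEYSPACE ... WITH replication = {..., 'NewDC': 3}
# 3) on each new node, stream the historical data:
nodetool rebuild -- ExistingDC

Plain "nodetool rebuild" may pick streaming sources from any DC; naming a DC
restricts the sources to that DC only.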






Long GC Pauses

2018-11-19 Thread Rajasekhar Kommineni
Hi All,

My Cassandra cluster configuration:

1) 2 DCs with 4 nodes each and a replication factor of 3 in each DC
2) Writes (bulk data load) go to the 2nd DC, and application reads go to the 1st DC
3) CMS GC

Issue : Observing long GC pauses during data load and timeouts from application 
(reads) during the same time.

Question : 

1) Why am I seeing GC pauses in the 1st DC, even though I am using a
stream_throughput of 16 Mb/s?
2) Is there any way to reduce the GC pause times other than changing the GC settings?
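
One hedged observation on question 1: stream_throughput_outbound_megabits_per_sec
only throttles streaming (bootstrap, rebuild, repair); a bulk load that goes
through the normal write path is replicated to the 1st DC as ordinary writes,
so the read DC still does the full write, flush, and compaction work, which is
usually what drives the GC. To correlate the timeouts with collections:

nodetool gcstats                                   # pause totals since the last invocation
grep -i GCInspector /var/log/cassandra/system.log  # pauses over ~200 ms are logged (package-install path; adjust for tar.gz)

If the pauses line up with the load window, throttling the loader itself, or
testing G1 with a larger heap on the read-heavy DC, are the usual next
experiments.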

Thanks,






Mview in cassandra

2018-10-15 Thread rajasekhar kommineni
Hi,

I am seeing the warning message below in system.log after a data copy using
sstableloader.

WARN  [CompactionExecutor:972] 2018-10-15 22:20:39,308 ViewBuilder.java:189 - 
Materialized View failed to complete, sleeping 5 minutes before restarting
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level ONE

I tried to drop the materialized view and recreate it, but the data does not
get populated on version 3.11.1.

I tried the same on version 3.11.2 on a single-node dev box, and there I can
query the materialized view with data. Does anybody have experience with
materialized views?
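
A hedged reading of the warning: the internal view builder writes view updates
at consistency ONE, and the UnavailableException suggests a required replica
was down or overloaded while it ran (it retries every 5 minutes, as the log
says). One way to check whether a rebuild actually completed, with placeholder
names:

SELECT * FROM system.built_views;   -- the view appears here once its build finishes

DROP MATERIALIZED VIEW my_ks.my_view;
CREATE MATERIALIZED VIEW my_ks.my_view AS
    SELECT * FROM my_ks.base_table
    WHERE col1 IS NOT NULL AND col2 IS NOT NULL
    PRIMARY KEY (col2, col1);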

Thanks,





Re: Data copy problem

2018-09-27 Thread rajasekhar kommineni
Just wanted to let you know that running the command from all the nodes matched
the record counts on both clusters.

One more question: does anybody know how to import the data into a different
keyspace using sstableloader?
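
sstableloader takes the keyspace and table names from the last two components
of the path it is given, so one common trick, sketched here with placeholder
paths, is to stage the sstables under a directory named for the target
keyspace (the target keyspace and table must already exist with a matching
schema):

mkdir -p /tmp/load/new_ks/my_table
cp /var/lib/cassandra/data/old_ks/my_table-*/snapshots/snap1/* /tmp/load/new_ks/my_table/
sstableloader -d node1,node2,node3,node4 /tmp/load/new_ks/my_table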

Thanks,


> On Sep 26, 2018, at 3:56 PM, rajasekhar kommineni wrote:
> [quoted messages trimmed; the full exchange appears below in this thread]



Re: Data copy problem

2018-09-26 Thread rajasekhar kommineni
I have to copy a whole keyspace which has many tables, some of them 30 GB. The
copy is going to take a while, and the load is also shooting up.

I have a 4-node source cluster and a 4-node target cluster. My question is: do
I need to execute the command below from all the nodes of the source cluster?

/bin/sstableloader -d node1,node2,node3,node4 'path to folder'
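
Yes, per the follow-up above: each node's files contain only that node's
replicas, so loading from a single node covers only part of the token range. A
hedged sketch with placeholder hosts and staging paths (the last two path
components must be keyspace/table):

for host in node1 node2 node3 node4; do
  ssh "$host" 'sstableloader -d target1,target2,target3,target4 /staging/my_ks/my_table'
done

With RF 3, the same row is streamed up to three times (once per replica); that
is harmless, just slower.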

Thanks,


> On Sep 26, 2018, at 3:43 PM, Kiran mk  wrote:
> 
> Please do try the COPY TO command to dump the data in CSV or another
> delimited format. Then run COPY FROM on the target cluster after copying the
> exported file.
> 
> Best Regards,
> Kiran.M.K
> 
> 
> On Thu, 27 Sep 2018 at 4:05 AM, rajasekhar kommineni <rajaco...@gmail.com> wrote:
> [quoted message trimmed; see "Data copy problem" below]
> 
> 
> 
> -- 
> Best Regards,
> Kiran.M.K.



Data copy problem

2018-09-26 Thread rajasekhar kommineni
Hi All,

I have a requirement to copy table data from one cluster to another. I used
both sstableloader and a snapshot copy followed by nodetool refresh. In both
cases the record count in the new cluster is less than in the original cluster.
I used SELECT COUNT(*) FROM <table> to count the records.

Does anybody have a clue how this works?
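
Two hedged things to check before trusting the comparison: COUNT(*) runs at the
session consistency level (ONE by default), so an unrepaired replica can answer
with fewer rows, and a count over a large table can fail partway with a
timeout. Something like this in cqlsh makes the comparison fairer (names are
placeholders):

CONSISTENCY ALL;
SELECT COUNT(*) FROM my_ks.my_table;

On big tables, start cqlsh with a larger timeout, e.g. cqlsh
--request-timeout=3600, and consider running a repair before recounting.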

Thanks,





Re: Compaction Strategy

2018-09-20 Thread rajasekhar kommineni
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
column_index_cache_size_in_kb: 2
compaction_throughput_mb_per_sec: 16
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
slow_query_log_timeout_in_ms: 500
cross_node_timeout: false
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    optional: false
    keystore: conf/.keystore
    keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
windows_timer_interval: 1
transparent_data_encryption_options:
    enabled: false
    chunk_length_kb: 64
    cipher: AES/CBC/PKCS5Padding
    key_alias: testing:1
    key_provider:
      - class_name: org.apache.cassandra.security.JKSKeyProvider
        parameters:
          - keystore: conf/.keystore
            keystore_password: cassandra
            store_type: JCEKS
            key_password: cassandra
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
unlogged_batch_across_partitions_warn_threshold: 10
compaction_large_partition_warning_threshold_mb: 100
gc_warn_threshold_in_ms: 1000
back_pressure_enabled: false
back_pressure_strategy:
    - class_name: org.apache.cassandra.net.RateBasedBackPressure
      parameters:
        - high_ratio: 0.90
          factor: 5
          flow: FAST
prd-relevancy-csdra1:/tmp >
On Sep 20, 2018, at 10:53 AM, Ali Hubail <ali.hub...@petrolink.com> wrote:

Hello Rajasekhar,

It's not really clear to me what your workload is. As I understand it, you do
heavy writes, but what about reads? So, could you:

1) execute:
nodetool tablestats
nodetool tablehistograms
nodetool compactionstats

We should be able to see the latency, workload type, and the number of
sstables used for reads.

2) specify your hardware specs, i.e., memory size, CPU, number of drives (for
data sstables), and type of hard drives (SSD/HDD)
3) cassandra.yaml (make sure to sanitize it)

You have a lot of updates, and your data is most likely scattered across
different sstables. Size-tiered compaction strategy (STCS) is much less
expensive than leveled compaction strategy (LCS).

Stopping the background compaction should be approached with caution; I think
your problem is more about why STCS compaction is taking more resources than
you expect.

Regards,

Ali Hubail

Petrolink International Ltd





On 09/19/2018 04:44 PM, rajasekhar kommineni <rajaco...@gmail.com> wrote to
user@cassandra.apache.org (Subject: Re: Compaction Strategy):

[quoted messages trimmed; they appear in full below as "Re: Compaction
Strategy" and "Compaction Strategy"]

Re: Compaction Strategy

2018-09-19 Thread rajasekhar kommineni
Hello,

Can anyone respond to my questions? Is it a good idea to disable
auto-compaction and schedule it every 3 days? I am unable to control
compaction, and it is causing timeouts.

Also, will reducing or increasing compaction_throughput_mb_per_sec eliminate
the timeouts?
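
Compaction throughput can be changed at runtime, which makes it cheap to test
that hypothesis before touching cassandra.yaml; a hedged sketch:

nodetool getcompactionthroughput      # current cap in MB/s
nodetool setcompactionthroughput 32   # try a higher cap; 0 removes the throttle entirely
nodetool compactionstats              # watch whether pending compactions drain

Persist the value in cassandra.yaml once a good setting is found. Disabling
auto-compaction for days tends to make reads slower rather than faster, since
each read then has to touch more sstables.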

Thanks,


> On Sep 17, 2018, at 9:38 PM, rajasekhar kommineni  wrote:
> 
> [quoted message trimmed; see "Compaction Strategy" below]





Compaction Strategy

2018-09-17 Thread rajasekhar kommineni
Hello Folks,

I need advice in deciding the compaction strategy for my Cassandra cluster.
There are multiple jobs that load the data with few inserts and mostly
updates, but no deletes. Currently I am using Size-Tiered compaction, but I
see auto-compactions kick in after the data load, and also read timeouts
during compaction.

Can anyone suggest a good compaction strategy for my cluster that will reduce
the timeouts?
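
For what it's worth, an update-heavy, delete-free table with latency-sensitive
reads is the textbook case for trying LeveledCompactionStrategy, which bounds
the number of sstables a read touches at the cost of more compaction I/O
during loads. A hedged sketch with placeholder names:

ALTER TABLE my_ks.my_table
    WITH compaction = {'class': 'LeveledCompactionStrategy',
                       'sstable_size_in_mb': '160'};

Note the switch triggers a rewrite of the existing sstables into levels, so
test it on one table during a quiet window first.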


Thanks,





Stress Test

2018-09-06 Thread rajasekhar kommineni
Hello Folks,

Can anybody recommend good documentation on Cassandra stress testing?

I have a few questions:

1) Which server is best to run the test from: the Cassandra server or the
application server?
2) I am using the DataStax Java driver; is there any good stress-test
documentation specific to this driver?
3) How do I analyze the stress-test output?
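
On 1) and 3), a minimal cassandra-stress run as a hedged starting point (the
tool ships with Cassandra under tools/bin; hosts are placeholders). Running it
from a separate load-generator machine, rather than a Cassandra node or the
application server, keeps client CPU from skewing the numbers:

tools/bin/cassandra-stress write n=1000000 -node node1,node2 -rate threads=50
tools/bin/cassandra-stress read n=1000000 -node node1,node2 -rate threads=50

On 2), cassandra-stress itself drives the cluster through the Java driver, and
the summary it prints (op rate, latency mean/median/95th/99th, errors) holds
the main numbers to compare across runs.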

Thanks,





Re: Switching Snitch

2018-08-26 Thread rajasekhar kommineni
Hi Pradeep,

For changing the snitch, you have to decommission and re-add the node with the
new snitch and updated properties files.

Thanks,
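
For reference, a hedged sketch of the stop/repair/cleanup sequence discussed in
the thread below, assuming the override flags are passed via cassandra-env.sh
(paths vary by install):

# on every node, before restarting with the new dc/rack values:
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"' >> conf/cassandra-env.sh
# full cluster stop and start (per the docs quoted below, a rolling restart is not enough), then on each node:
nodetool repair --sequential
nodetool cleanup
# finally remove the override flags and restart once more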


> On Aug 26, 2018, at 2:15 PM, Joshua Galbraith wrote:
> 
> Pradeep,
> 
> Here are some related tickets that may also be helpful in understanding the 
> current behavior of these options.
> 
> * https://issues.apache.org/jira/browse/CASSANDRA-5897 
> 
> * https://issues.apache.org/jira/browse/CASSANDRA-9474 
> 
> * https://issues.apache.org/jira/browse/CASSANDRA-10243 
> 
> * https://issues.apache.org/jira/browse/CASSANDRA-10242 
> 
> 
> On Sun, Aug 26, 2018 at 1:20 PM, Joshua Galbraith wrote:
> Pradeep,
> 
> That being said, I haven't experimented with -Dcassandra.ignore_dc=true 
> -Dcassandra.ignore_rack=true before.
> 
> The description here may be helpful:
> https://github.com/apache/cassandra/blob/trunk/NEWS.txt#L685-L693 
> 
> 
> I would spin up a small test cluster with data you don't care about and 
> verify that your above assumptions are correct there first.
> 
> On Sun, Aug 26, 2018 at 1:09 PM, Joshua Galbraith wrote:
> Pradeep,
> 
> Right, so from that documentation it sounds like you actually have to stop
> all nodes in the cluster at once and bring them back up one at a time. A
> rolling restart won't work here.
> 
> On Sun, Aug 26, 2018 at 11:46 AM, Pradeep Chhetri wrote:
> Hi Joshua,
> 
> Thank you for the reply. Sorry, I forgot to mention that I already went
> through that documentation. There are a few missing things about which I
> have questions:
> 
> 1) One thing which isn't mentioned there is that Cassandra fails to restart
> when we change the datacenter name or rack name of a node. So should I first
> do a rolling restart of Cassandra with the flags "-Dcassandra.ignore_dc=true
> -Dcassandra.ignore_rack=true", then run a sequential repair, then cleanup,
> and then do a rolling restart of Cassandra without those flags?
> 
> 2) Should I disallow read/write operations from applications while the
> sequential repair is running?
> 
> Regards,
> Pradeep
> 
> On Mon, Aug 27, 2018 at 12:19 AM, Joshua Galbraith wrote:
> Pradeep, it sounds like what you're proposing counts as a topology change 
> because you are changing the datacenter name and rack name.
> 
> Please refer to the documentation here about what to do in that situation:
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsSwitchSnitch.html
>  
> 
> 
> In particular:
> 
> Simply altering the snitch and replication to move some nodes to a new 
> datacenter will result in data being replicated incorrectly.
> 
> Topology changes may occur when the replicas are placed in different places 
> by the new snitch. Specifically, the replication strategy places the replicas 
> based on the information provided by the new snitch.
> 
> If the topology of the network has changed, but no datacenters are added:
> a. Shut down all the nodes, then restart them.
> b. Run a sequential repair and nodetool cleanup on each node.
> 
> On Sun, Aug 26, 2018 at 11:14 AM, Pradeep Chhetri wrote:
> Hello everyone,
> 
> Since i didn't hear from anyone, just want to describe my question again:
> 
> Am I correct in understanding that I need to do the following steps to
> migrate from SimpleSnitch to GPFS, changing the datacenter name and rack
> name to the AWS region and availability zone respectively:
> 
> 1) Update the rack and datacenter fields in cassandra-rackdc.properties file 
> and rolling restart cassandra with this flag "-Dcassandra.ignore_dc=true 
> -Dcassandra.ignore_rack=true"
> 
> 2) Run nodetool repair --sequential and nodetool cleanup.
> 
> 3) Rolling restart cassandra removing the flag  "-Dcassandra.ignore_dc=true 
> -Dcassandra.ignore_rack=true"
> 
> Regards,
> Pradeep
> 
> On Thu, Aug 23, 2018 at 10:53 PM, Pradeep Chhetri wrote:
> Hello,
> 
> I am currently running a 3.11.2 cluster with SimpleSnitch, hence the
> datacenter is datacenter1 and the rack is rack1 for all nodes on AWS. I want
> to switch to GPFS by changing the rack name to the availability-zone name
> and the datacenter name to the region name.
> 
> When I try to restart individual nodes after changing those values, they
> fail to start, throwing an error about the dc and rack name mismatch, but
> with an option to set ignore_dc and ignore_rack to true to bypass it.
> 
> I am not sure if it is safe to set those two flags to true and if there is 
> any drawback 

Cassandra caches

2018-08-10 Thread rajasekhar kommineni
Hi All,

I had allocated 2 GB each for the key, row, counter, and chunk caches and
performed the steps below. Please note this is a test box; no other users are
connected to it.

Output 1 shows 0 hits and 0 requests - after a clean startup of Cassandra
Output 2 shows 0 hits and 1 request - executed a SELECT query that returns 1
row (JSON format)
Output 3 shows 1 hit and 2 requests - re-executed the same SELECT query from
step 2

My questions are:

1) Is there any way to preload all the rows into the row cache without
executing SELECT statements?
2) Regarding the key cache: I only selected col2 (i.e., the value), but
requests increased by 4, from 98 to 102.

Can anyone explain the above?
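
Two hedged notes. On 1): there is no built-in switch to preload the whole row
cache; it warms as rows are read, and the save/load settings only persist the
hottest keys to the saved_caches directory so they are reloaded at startup
(cassandra.yaml, values illustrative):

row_cache_size_in_mb: 2048
row_cache_save_period: 60      # seconds between periodic saves
row_cache_keys_to_save: 100    # 0 (the default) saves all keys

On 2): the key cache is consulted once per sstable probed, so a single query
that checks several sstables can add several requests, which would explain +4
for one SELECT.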

cqlsh:> desc table

CREATE TABLE table (
    Col1 text PRIMARY KEY,
    Col2 text            -- (JSON format)
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

Output1:

Hostname:/Users/> nodetool info
ID : 9b10a667-b668-44c4-8deb-2e0ad317f287
Gossip active  : true
Thrift active  : false
Native Transport active: true
Load   : 488.54 MiB
Generation No  : 1533931780
Uptime (seconds)   : 20
Heap Memory (MB)   : 572.35 / 12208.00
Off Heap Memory (MB)   : 0.20
Data Center: datacenter1
Rack   : rack1
Exceptions : 0
Key Cache  : entries 26, size 2.3 KiB, capacity 2 GiB, 62 hits, 98 
requests, 0.633 recent hit rate, 60 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 2 GiB, 0 hits, 0 
requests, NaN recent hit rate, 60 save period in seconds
Counter Cache  : entries 0, size 0 bytes, capacity 2 GiB, 0 hits, 0 
requests, NaN recent hit rate, 60 save period in seconds
Chunk Cache: entries 19, size 1.19 MiB, capacity 1.97 GiB, 62 
misses, 186 requests, 0.667 recent hit rate, 233.240 microseconds miss latency
Percent Repaired   : 0.0%
Token  : (invoke with -T/--tokens to see all 256 tokens)
rkommineni-mac.local:/Users/rkommineni >

Select col2 from table where col1=key;

Output2:

Hostname:/Users/> nodetool info
ID : 9b10a667-b668-44c4-8deb-2e0ad317f287
Gossip active  : true
Thrift active  : false
Native Transport active: true
Load   : 488.54 MiB
Generation No  : 1533931780
Uptime (seconds)   : 68
Heap Memory (MB)   : 627.20 / 12208.00
Off Heap Memory (MB)   : 0.20
Data Center: datacenter1
Rack   : rack1
Exceptions : 0
Key Cache  : entries 28, size 2.5 KiB, capacity 2 GiB, 64 hits, 102 
requests, 0.627 recent hit rate, 60 save period in seconds
Row Cache  : entries 1, size 40.07 KiB, capacity 2 GiB, 0 hits, 1 
requests, 0.000 recent hit rate, 60 save period in seconds
Counter Cache  : entries 0, size 0 bytes, capacity 2 GiB, 0 hits, 0 
requests, NaN recent hit rate, 60 save period in seconds
Chunk Cache: entries 24, size 1.5 MiB, capacity 1.97 GiB, 67 
misses, 225 requests, 0.702 recent hit rate, 285.506 microseconds miss latency
Percent Repaired   : 0.0%
Token  : (invoke with -T/--tokens to see all 256 tokens)

Select col2 from table where col1=key;  - Reran the same query

Output3:

Hostname:/Users/> nodetool info
ID : 9b10a667-b668-44c4-8deb-2e0ad317f287
Gossip active  : true
Thrift active  : false
Native Transport active: true
Load   : 488.54 MiB
Generation No  : 1533931780
Uptime (seconds)   : 78
Heap Memory (MB)   : 651.93 / 12208.00
Off Heap Memory (MB)   : 0.20
Data Center: datacenter1
Rack   : rack1
Exceptions : 0
Key Cache  : entries 28, size 2.5 KiB, capacity 2 GiB, 64 hits, 102 
requests, 0.627 recent hit rate, 60 save period in seconds
Row Cache  : entries 1, size 40.07 KiB, capacity 2 GiB, 1 hits, 2 
requests, 0.500 recent hit rate, 60 save period in seconds
Counter Cache  : entries 0, size 0 bytes, capacity 2 GiB, 0 hits, 0 
requests, NaN recent hit rate, 60 save period in seconds
Chunk Cache: entries 24, size 1.5 MiB, capacity 1.97 GiB, 67 
misses, 225 requests, 0.702 recent hit rate, 208.327 microseconds miss latency
Percent Repaired   : 0.0%
Token  : (invoke with -T/--tokens to see 

Server Kernel Parameters for Cassandra

2018-07-29 Thread rajasekhar kommineni
Hello,

Do we have any standard values for server kernel parameters to run Cassandra?
Please share some insight.
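
There is no single mandated set, but the values below, drawn from commonly
published Cassandra install guidance, are a hedged starting point; verify them
against your distribution and workload first:

# /etc/sysctl.d/99-cassandra.conf, then: sysctl -p /etc/sysctl.d/99-cassandra.conf
vm.max_map_count = 1048575    # Cassandra memory-maps many sstable files
vm.swappiness = 1             # keep the JVM heap out of swap
net.core.rmem_max = 16777216  # larger socket buffers help streaming and inter-DC links
net.core.wmem_max = 16777216

# /etc/security/limits.d/cassandra.conf
# cassandra - memlock unlimited
# cassandra - nofile  100000
# cassandra - nproc   32768
# cassandra - as      unlimited

Disabling swap outright (swapoff -a, and removing it from fstab) is the other
common recommendation.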

Thanks,





Re: Cassandra Repair

2018-07-17 Thread rajasekhar kommineni
nodetool tablestats has an attribute for "Percent repaired"; can we target the
tables based on the percentage given?
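
A hedged caveat on that metric: "Percent repaired" reflects sstables marked
repaired (primarily by incremental repair), so it is a rough scheduling signal
at best for full repairs. The round-robin idea from the reply below might look
like this, with placeholder hosts and paths:

# node1 crontab: repair this node's primary ranges every Monday at 01:00
0 1 * * 1  /usr/bin/nodetool repair -pr >> /var/log/cassandra/repair.log 2>&1
# node2 takes Tuesday (0 1 * * 2), node3 Wednesday, and so on, so the whole
# ring is repaired well inside the shortest gc_grace_seconds (10 days by default)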



> On Jul 17, 2018, at 4:45 AM, Rahul Singh  wrote:
> 
> Have you considered looking into the Reaper project? It could save you time
> in figuring out your own strategy:
> https://github.com/thelastpickle/cassandra-reaper
> 
> Otherwise you can always do a round robin of cron jobs per node once a week…
> Your repair cycle should repair all servers within a window less than your
> shortest GC grace seconds.
> 
> So if you have a GC grace of 10 days, you want to complete your repairs in 9 days…
> 
> 
> 
> --
> Rahul Singh
> rahul.si...@anant.us
> 
> Anant Corporation
> On Jul 16, 2018, 5:15 PM -0400, rajasekhar kommineni wrote:
>> [quoted message trimmed; see "Cassandra Repair" below]



Cassandra Repair

2018-07-16 Thread rajasekhar kommineni
Hello All,


I have all cluster nodes in the cloud, and there is a very rare chance of
nodes going down. I want to prepare a repair strategy for my cluster, so I
need input on any calculations to decide when to repair.

Also, let me know whether my statement is correct: "It's not only node
downtime, but the write consistency level is also a factor for regular repairs."


Thanks,



Best approach for node decommission

2018-07-12 Thread rajasekhar kommineni
Hi All,

Can anybody let me know the best approach for decommissioning a node in the
cluster? My cluster uses vnodes. Is there any way to verify that all the data
of the decommissioning node has been moved to the remaining nodes before
completely shutting down the server?

I followed the procedure below:

1) nodetool flush
2) nodetool repair
3) nodetool decommission

The aggregate Load before the node3 decommission is 1411.47 MiB and after is
1380.15 MiB. Can I ignore the size difference and treat all of node3's data as
having been moved to the other nodes?

I am looking for a good data-validation process without depending on the
application team for verification.

Total load : 1411.47

--  Address  Load  Tokens  Owns  Host ID  Rack
UN node1 220.48 MiB 256 ? ff09b08b-29c1-4365-a3b7-1eea51f7d575 rack1
UN node2 216.53 MiB 256 ? 4b565a31-4c77-418f-a47f-5e0eb2ec5624 rack1
UN node3 64.52  MiB 256 ? 12b29812-cc60-456c-95a9-0e339c249bc8 rack1
UN node4 195.84 MiB 256 ? 0424a882-de4f-4e6a-b642-6ce9f4621e04 rack1
UN node5 179.07 MiB 256 ? 2f291a2e-b10d-4364-8192-13e107a9c322 rack1
UN node6 213.75 MiB 256 ? cf10166b-cfae-44fd-8bca-f55a4f9ef491 rack1
UN node7 158.54 MiB 256 ? ef8454c7-3005-487a-a3d4-e0065edfd99f rack1
UN node8 162.74 MiB 256 ? 7d786e46-1c11-485c-a943-bbcca6729ae1 rack1

Total Load : 1380.15

--  Address  Load  Tokens  Owns  Host ID  Rack
UN node1 229.04 MiB 256 ? ff09b08b-29c1-4365-a3b7-1eea51f7d575 rack1
UN node2 225.52 MiB 256 ? 4b565a31-4c77-418f-a47f-5e0eb2ec5624 rack1
UN node4 195.84 MiB 256 ? 0424a882-de4f-4e6a-b642-6ce9f4621e04 rack1
UN node5 179.07 MiB 256 ? 2f291a2e-b10d-4364-8192-13e107a9c322 rack1
UN node6 229.4  MiB 256 ? cf10166b-cfae-44fd-8bca-f55a4f9ef491 rack1
UN node7 158.54 MiB 256 ? ef8454c7-3005-487a-a3d4-e0065edfd99f rack1
UN node8 162.74 MiB 256 ? 7d786e46-1c11-485c-a943-bbcca6729ae1 rack1
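
On verification, a hedged note: nodetool decommission only returns once all of
the leaving node's ranges have streamed to their new owners, so a clean exit
is itself the confirmation, and a small Load delta afterwards is expected
because the receiving nodes compact the streamed sstables. While it runs,
progress can be watched from the leaving node:

nodetool netstats   # shows Mode: LEAVING and per-file streaming progress
nodetool status     # the leaving node shows UL (Up/Leaving) until streaming completes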

Thanks,



Re: Installation

2018-07-10 Thread rajasekhar kommineni
Thanks Michael. While I agree with the advantage of symlinks, I am worried
about future upgrades.

My concern here is how to unlink the Cassandra binaries like nodetool,
cassandra, cqlsh, etc. after migrating to the tar.gz installation.
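
A hedged sketch of that step: the package owns /usr/bin/nodetool and friends,
so removing the package, rather than deleting files by hand, is the clean way
to drop them once the tar.gz install is live (the package name and example
paths are illustrative, so verify them first):

sudo yum remove cassandra   # RPM systems; or: sudo apt-get remove cassandra
# then put the tarball's tools on PATH, per Michael's note below:
export CASSANDRA_HOME=/opt/apache-cassandra-3.11.2
export PATH="$CASSANDRA_HOME/bin:$CASSANDRA_HOME/tools/bin:$PATH"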

Thanks,


> On Jul 10, 2018, at 5:46 AM, Michael Shuler  wrote:
> 
> On 07/10/2018 02:48 AM, rajasekhar kommineni wrote:
>> [quoted message trimmed; see "Re: Installation" below]
> 
> This is a basic linux usage thing, not really a cassandra problem, but
> it's why packages make things simple for general use - the default
> /usr/{s}bin locations are in $PATH. If you wish to have nodetool, etc.
> in your user's $PATH, just update the user's shell configuration to
> include the tar locations.
> 
> export CASSANDRA_HOME=
> export PATH="$CASSANDRA_HOME/bin:$CASSANDRA_HOME/tools/bin:$PATH"
> 
> This can be added to the bottom of ~/.bashrc for persistence. Bonus
> points for symlink of generic cassandra_home to versioned one, which is
> used for upgrades without messing with PATH env for user and within
> configs for Cassandra.
> 
> -- 
> Michael
> 
> 





Re: Installation

2018-07-10 Thread rajasekhar kommineni
Hi Rahul,

The problem with removing the old links is that the Cassandra binaries are
pointed to from /usr/bin, /usr/sbin, etc.:

$ which nodetool 
/usr/bin/nodetool
$ which cqlsh
/usr/bin/cqlsh
$ which cassandra
/usr/sbin/cassandra
$ 


Thanks,


> On Jul 10, 2018, at 12:28 AM, Rahul Singh  
> wrote:
> 
> That approach will work, however it may take a long time.
> 
> The important things that are unique to your cluster will be your
> configuration files and your data/log directories.
> 
> The binaries can be placed on the same machines via a tar installation.
> While keeping the machines running on the old binaries, you can migrate to
> the new data/log directories. If you move your data, you can use links in
> Linux to point the old directories to the new locations.
> 
> Once this is done, you can configure your tar installation to point to your
> new data directories, then turn off the old binaries and turn on the new
> binaries, one node at a time.
> 
> 
> --
> Rahul Singh
> rahul.si...@anant.us
> 
> Anant Corporation
> On Jul 9, 2018, 6:35 PM -0500, rajpal reddy wrote:
>> We have our infrastructure in the cloud, so we opted for adding a new DC
>> with tar.gz and then removed the old DC with the package installation.
>> 
>> Sent from my iPhone
>> 
>>> On Jul 9, 2018, at 2:23 PM, rajasekhar kommineni wrote:
>>> [quoted message trimmed; see "Installation" below]
>>> 
>> 
>> 



Installation

2018-07-09 Thread rajasekhar kommineni
Hello All,

I have a Cassandra cluster installed from packages, and I want to convert it
to a tar.gz installation. Is there a procedure to follow?

Thanks,
Rajasekhar Kommineni

