Re: C* 2.2.7 ?

2016-06-29 Thread horschi
Awesome! There is a lot of good stuff in 2.2.7 :-)

On Wed, Jun 29, 2016 at 5:37 PM, Tyler Hobbs  wrote:

> 2.2.7 just got tentatively tagged yesterday.  So, there should be a vote
> on releasing it shortly.
>
> On Wed, Jun 29, 2016 at 8:24 AM, Dominik Keil 
> wrote:
>
>> +1
>>
>> there are some bugs fixed that we might be, or are sure to be, affected by,
>> and the change log has already become quite large. Mind voting on 2.2.7 soon?
>>
>>
>> On 21.06.2016 at 15:31, horschi wrote:
>>
>> Hi,
>>
>> are there any plans to release 2.2.7 any time soon?
>>
>> kind regards,
>> Christian
>>
>>
>> --
>> *Dominik Keil*
>> Phone: + 49 (0) 621 150 207 31
>> Mobile: + 49 (0) 151 626 602 14
>>
>> Movilizer GmbH
>> Konrad-Zuse-Ring 30
>> 68163 Mannheim
>> Germany
>>
>> movilizer.com
>>
>> *Reinvent Your Mobile Enterprise*
>>
>> *Movilizer is moving*
>> After June 27th 2016, Movilizer's new headquarters will be
>>
>>
>>
>>
>> *EASTSITE VIII, Konrad-Zuse-Ring 30, 68163 Mannheim*
>>
>> 
>> 
>>
>> *Be the first to know:*
>> Twitter  | LinkedIn
>>  | Facebook
>>  | stack overflow
>> 
>>
>> Company's registered office: Mannheim HRB: 700323 / Country Court:
>> Mannheim Managing Directors: Alberto Zamora, Jörg Bernauer, Oliver Lesche
>> Please inform us immediately if this e-mail and/or any attachment was
>> transmitted incompletely or was not intelligible.
>>
>> This e-mail and any attachment is for authorized use by the intended
>> recipient(s) only. It may contain proprietary material, confidential
>> information and/or be subject to legal privilege. It should not be
>> copied, disclosed to, retained or used by any other party. If you are not
>> an intended recipient then please promptly delete this e-mail and any
>> attachment and all copies and inform the sender.
>
>
>
>
> --
> Tyler Hobbs
> DataStax 
>


Re: Changing a cluster name

2016-06-29 Thread Paul Fife
Thanks Dominik - I was doing a nodetool flush like the instructions said,
but it wasn't actually flushing the system keyspace. Using nodetool flush
system made it work as expected!

Thanks,
Paul Fife

On Wed, Jun 29, 2016 at 7:37 AM, Dominik Keil 
wrote:

> Also you might want to explicitly do "nodetool flush system". I've
> recently done this in C* 2.2.6 and just "nodetool flush" would not have
> flushed the system keyspace, leading to the change in cluster name not
> being persisted across restarts.
>
> Cheers
>
>
> On 29.06.2016 at 03:36, Surbhi Gupta wrote:
>
> system.local uses the local strategy. You need to update it on all nodes.
>
>
> On 28 June 2016 at 14:51, Tyler Hobbs  wrote:
>
>> First, make sure that you call nodetool flush after modifying the system
>> table.  That's probably why it's not surviving the restart.
>>
>> Second, I believe you will have to do this across all nodes and restart
>> them at the same time.  Otherwise, cluster name mismatches will prevent the
>> nodes from communicating with each other.
>>
>> On Fri, Jun 24, 2016 at 3:51 PM, Paul Fife  wrote:
>>
>>> I am following the instructions here to attempt to change the name of a
>>> cluster: https://wiki.apache.org/cassandra/FAQ#clustername_mismatch
>>> or at least the more up to date advice:
>>> 
>>> http://stackoverflow.com/questions/22006887/cassandra-saved-cluster-name-test-cluster-configured-name
>>>
>>> I am able to query the system.local to verify the clusterName is
>>> modified, but when I restart Cassandra it fails, and the value is back at
>>> the original cluster name. Is this still possible, or are there changes
>>> preventing this from working anymore?
>>>
>>> I have attempted this several times and it did actually work the first
>>> time, but when I moved around to the other nodes it no longer worked.
>>>
>>> Thanks,
>>> Paul Fife
>>>
>>>
>>
>>
>> --
>> Tyler Hobbs
>> DataStax 
>>
>
>
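
Pulling the advice in this thread together, the rename procedure can be sketched as a short shell sequence. This is a sketch only: cqlsh and nodetool are assumed to be on PATH, the new cluster name is a made-up example, and the DRY_RUN wrapper keeps the script from touching a live node until you are ready.

```shell
# Sketch of the cluster-rename sequence discussed in this thread.
# DRY_RUN=1 only prints each command; set DRY_RUN=0 to actually run them.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

NEW_NAME="My Renamed Cluster"   # hypothetical example name

# 1. On EVERY node, rewrite the saved name in the system keyspace:
run cqlsh -e "UPDATE system.local SET cluster_name = '$NEW_NAME' WHERE key = 'local'"

# 2. Flush the system keyspace explicitly; on 2.2.x a bare 'nodetool flush'
#    may skip it, so the change would not survive a restart:
run nodetool flush system

# 3. Update cluster_name in cassandra.yaml on every node, then restart all
#    nodes together: nodes with mismatched cluster names refuse to gossip.
```

Step 3 is why a rolling, one-node-at-a-time rename does not work, per Tyler's note above.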


Re: C* 2.2.7 ?

2016-06-29 Thread Tyler Hobbs
2.2.7 just got tentatively tagged yesterday.  So, there should be a vote on
releasing it shortly.

On Wed, Jun 29, 2016 at 8:24 AM, Dominik Keil 
wrote:

> +1
>
> there are some bugs fixed that we might be, or are sure to be, affected by,
> and the change log has already become quite large. Mind voting on 2.2.7 soon?
>
>
> On 21.06.2016 at 15:31, horschi wrote:
>
> Hi,
>
> are there any plans to release 2.2.7 any time soon?
>
> kind regards,
> Christian
>
>




-- 
Tyler Hobbs
DataStax 


Re: Changing a cluster name

2016-06-29 Thread Dominik Keil
Also you might want to explicitly do "nodetool flush system". I've 
recently done this in C* 2.2.6 and just "nodetool flush" would not have 
flushed the system keyspace, leading to the change in cluster name not 
being persisted across restarts.


Cheers

On 29.06.2016 at 03:36, Surbhi Gupta wrote:

system.local uses the local strategy. You need to update it on all nodes.


On 28 June 2016 at 14:51, Tyler Hobbs > wrote:


First, make sure that you call nodetool flush after modifying the
system table.  That's probably why it's not surviving the restart.

Second, I believe you will have to do this across all nodes and
restart them at the same time.  Otherwise, cluster name mismatches
will prevent the nodes from communicating with each other.

On Fri, Jun 24, 2016 at 3:51 PM, Paul Fife > wrote:

I am following the instructions here to attempt to change the
name of a cluster:
https://wiki.apache.org/cassandra/FAQ#clustername_mismatch
or at least the more up to date advice:

http://stackoverflow.com/questions/22006887/cassandra-saved-cluster-name-test-cluster-configured-name

I am able to query the system.local to verify the clusterName
is modified, but when I restart Cassandra it fails, and the
value is back at the original cluster name. Is this still
possible, or are there changes preventing this from working
anymore?

I have attempted this several times and it did actually work
the first time, but when I moved around to the other nodes it
no longer worked.

Thanks,
Paul Fife




-- 
Tyler Hobbs

DataStax 






Re: Problems with nodetool

2016-06-29 Thread Ralf Meier
Yes. 


> On 29.06.2016 at 15:48, Sebastian Estevez wrote:
> 
> 
> Did you mean `nodetool status` not `node-tool status` ?
> 
> All the best,
> 
>  
> Sebastián Estévez
> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com 
> 
>    
>     
>   
> 
>  
> 
>  
> 
> DataStax is the fastest, most scalable distributed database technology, 
> delivering Apache Cassandra to the world’s most innovative enterprises. 
> DataStax is built to be agile, always-on, and predictably scalable to any 
> size. With more than 500 customers in 45 countries, DataStax is the database 
> technology and transactional backbone of choice for the world’s most 
> innovative companies such as Netflix, Adobe, Intuit, and eBay. 
> 
> On Wed, Jun 29, 2016 at 6:42 AM, Ralf Meier  > wrote:
> Hi everybody,
> 
> I tried to install a cassandra cluster using docker (official image) on 6 
> different machines. (Each physical machine will host one docker container.)
> Each physical node has two network cards, one for an „internal network“ that 
> the cassandra cluster should use for communication. (IP: 10.20.39.1 to 
> x.x.x.6)
> Because of a port conflict on the host machine I had to change port 7000 
> in cassandra.yaml to 7002 for the communication between the nodes.
> 
> The docker containers spun up without any issues on each node.
> 
> Now I tried to check whether all nodes could communicate with each other by 
> using the "node-tool status“ command. But whenever I entered the command I
> got as output only the help information on how to use the node-tool. (Even if I 
> add -p 7002 it does not help.)
> I did not get any status about the cluster.
> 
> So far I have not found anything in the logs, but I could also not check 
> the status of the cluster.
> 
> Does somebody have an idea how to change the configuration, or what I have 
> to change so that the cluster works?
> 
> Thanks for your help
> BR
> Ralf
> 
> 
> 
> Attached is the configuration that was set in cassandra.yaml (from node 
> 1, which should also act as seed node):
> cluster_name: 'TestCluster'
> num_tokens: 256
> max_hint_window_in_ms: 1080 # 3 hours
> hinted_handoff_throttle_in_kb: 1024
> max_hints_delivery_threads: 2
> hints_flush_period_in_ms: 1
> max_hints_file_size_in_mb: 128
> batchlog_replay_throttle_in_kb: 1024
> authenticator: AllowAllAuthenticator
> authorizer: AllowAllAuthorizer
> role_manager: CassandraRoleManager
> roles_validity_in_ms: 2000
> permissions_validity_in_ms: 2000
> credentials_validity_in_ms: 2000
> partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> data_file_directories:
> - /var/lib/cassandra/data
> commitlog_directory: /var/lib/cassandra/commitlog
> disk_failure_policy: stop
> commit_failure_policy: stop
> prepared_statements_cache_size_mb:
> thrift_prepared_statements_cache_size_mb:
> key_cache_size_in_mb:
> key_cache_save_period: 14400
> row_cache_size_in_mb: 0
> row_cache_save_period: 0
> counter_cache_size_in_mb:
> counter_cache_save_period: 7200
> saved_caches_directory: /var/lib/cassandra/saved_caches
> commitlog_sync: periodic
> commitlog_sync_period_in_ms: 1
> commitlog_segment_size_in_mb: 32
> seed_provider:
> - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>   parameters:
>   - seeds: "10.20.39.1"
> concurrent_reads: 32
> concurrent_writes: 32
> concurrent_counter_writes: 32
> concurrent_materialized_view_writes: 32
> memtable_allocation_type: heap_buffers
> index_summary_capacity_in_mb:
> index_summary_resize_interval_in_minutes: 60
> trickle_fsync: false
> trickle_fsync_interval_in_kb: 10240
> storage_port: 7002
> ssl_storage_port: 7001
> listen_address: 10.20.39.1
> broadcast_address: 10.20.39.1
> start_rpc: false
> rpc_address: 0.0.0.0
> rpc_port: 9160
> broadcast_rpc_address: 10.20.39.1
> rpc_keepalive: true
> rpc_server_type: sync
> thrift_framed_transport_size_in_mb: 15
> incremental_backups: false
> snapshot_before_compaction: false
> auto_snapshot: true
> column_index_size_in_kb: 64
> column_index_cache_size_in_kb: 2
> compaction_throughput_mb_per_sec: 16
> sstable_preemptive_open_interval_in_mb: 50
> read_request_timeout_in_ms: 5000
> range_request_timeout_in_ms: 1
> write_request_timeout_in_ms: 2000
> counter_write_request_timeout_in_ms: 5000
> cas_contention_timeout_in_ms: 1000
> truncate_request_timeout_in_ms: 6
> request_timeout_in_ms: 1
> cross_node_timeout: false
> endpoint_snitch: SimpleSnitch
> dynamic_snitch_update_interval_in_ms: 100
> dynamic_snitch_reset_interval_in_ms: 60
> dynamic_snitch_badness_threshold: 0.1
> 

Re: Problems with nodetool

2016-06-29 Thread Sebastian Estevez
Did you mean `nodetool status` not `node-tool status` ?

All the best,



Sebastián Estévez

Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com








On Wed, Jun 29, 2016 at 6:42 AM, Ralf Meier  wrote:

> Hi everybody,
>
> I tried to install a cassandra cluster using docker (official image) on 6
> different machines. (Each physical machine will host one docker container.)
> Each physical node has two network cards, one for an „internal network“
> that the cassandra cluster should use for communication. (IP: 10.20.39.1
> to x.x.x.6)
> Because of a port conflict on the host machine I had to change port
> 7000 in cassandra.yaml to 7002 for the communication between the nodes.
>
> The docker containers spun up without any issues on each node.
>
> Now I tried to check whether all nodes could communicate with each other by
> using the "node-tool status“ command. But whenever I entered the command I
> got as output only the help information on how to use the node-tool. (Even
> if I add -p 7002 it does not help.)
> I did not get any status about the cluster.
>
> So far I have not found anything in the logs, but I could also not
> check the status of the cluster.
>
> Does somebody have an idea how to change the configuration, or what I
> have to change so that the cluster works?
>
> Thanks for your help
> BR
> Ralf
>
>
>
> Attached is the configuration that was set in cassandra.yaml (from
> node 1, which should also act as seed node):
> cluster_name: 'TestCluster'
> num_tokens: 256
> max_hint_window_in_ms: 1080 # 3 hours
> hinted_handoff_throttle_in_kb: 1024
> max_hints_delivery_threads: 2
> hints_flush_period_in_ms: 1
> max_hints_file_size_in_mb: 128
> batchlog_replay_throttle_in_kb: 1024
> authenticator: AllowAllAuthenticator
> authorizer: AllowAllAuthorizer
> role_manager: CassandraRoleManager
> roles_validity_in_ms: 2000
> permissions_validity_in_ms: 2000
> credentials_validity_in_ms: 2000
> partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> data_file_directories:
> - /var/lib/cassandra/data
> commitlog_directory: /var/lib/cassandra/commitlog
> disk_failure_policy: stop
> commit_failure_policy: stop
> prepared_statements_cache_size_mb:
> thrift_prepared_statements_cache_size_mb:
> key_cache_size_in_mb:
> key_cache_save_period: 14400
> row_cache_size_in_mb: 0
> row_cache_save_period: 0
> counter_cache_size_in_mb:
> counter_cache_save_period: 7200
> saved_caches_directory: /var/lib/cassandra/saved_caches
> commitlog_sync: periodic
> commitlog_sync_period_in_ms: 1
> commitlog_segment_size_in_mb: 32
> seed_provider:
> - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>   parameters:
>   - seeds: "10.20.39.1"
> concurrent_reads: 32
> concurrent_writes: 32
> concurrent_counter_writes: 32
> concurrent_materialized_view_writes: 32
> memtable_allocation_type: heap_buffers
> index_summary_capacity_in_mb:
> index_summary_resize_interval_in_minutes: 60
> trickle_fsync: false
> trickle_fsync_interval_in_kb: 10240
> storage_port: 7002
> ssl_storage_port: 7001
> listen_address: 10.20.39.1
> broadcast_address: 10.20.39.1
> start_rpc: false
> rpc_address: 0.0.0.0
> rpc_port: 9160
> broadcast_rpc_address: 10.20.39.1
> rpc_keepalive: true
> rpc_server_type: sync
> thrift_framed_transport_size_in_mb: 15
> incremental_backups: false
> snapshot_before_compaction: false
> auto_snapshot: true
> column_index_size_in_kb: 64
> column_index_cache_size_in_kb: 2
> compaction_throughput_mb_per_sec: 16
> sstable_preemptive_open_interval_in_mb: 50
> read_request_timeout_in_ms: 5000
> range_request_timeout_in_ms: 1
> write_request_timeout_in_ms: 2000
> counter_write_request_timeout_in_ms: 5000
> cas_contention_timeout_in_ms: 1000
> truncate_request_timeout_in_ms: 6
> request_timeout_in_ms: 1
> cross_node_timeout: false
> endpoint_snitch: SimpleSnitch
> dynamic_snitch_update_interval_in_ms: 100
> dynamic_snitch_reset_interval_in_ms: 60
> dynamic_snitch_badness_threshold: 0.1
> request_scheduler: org.apache.cassandra.scheduler.NoScheduler
> server_encryption_options:
> internode_encryption: none
> keystore: 

Re: C* 2.2.7 ?

2016-06-29 Thread Dominik Keil

+1

there are some bugs fixed that we might be, or are sure to be, affected by, and 
the change log has already become quite large. Mind voting on 2.2.7 soon?


On 21.06.2016 at 15:31, horschi wrote:

Hi,

are there any plans to release 2.2.7 any time soon?

kind regards,
Christian




Re: Motivation for a DHT ring

2016-06-29 Thread jean paul
2016-06-28 22:29 GMT+01:00 jean paul :

> Hi all,
>
> Please, What is the motivation for choosing a DHT ring in cassandra? Why
> not use a normal parallel or distributed file system that supports
> replication?
>
> Thank you so much for clarification.
>
> Kind regards.
>


Problems with nodetool

2016-06-29 Thread Ralf Meier
Hi everybody,

I tried to install a cassandra cluster using docker (official image) on 6 
different machines. (Each physical machine will host one docker container.) 
Each physical node has two network cards, one for an „internal network“ that 
the cassandra cluster should use for communication. (IP: 10.20.39.1 to x.x.x.6)
Because of a port conflict on the host machine I had to change port 7000 in 
cassandra.yaml to 7002 for the communication between the nodes. 

The docker containers spun up without any issues on each node. 

Now I tried to check whether all nodes could communicate with each other by using the 
"node-tool status“ command. But whenever I entered the command I
got as output only the help information on how to use the node-tool.  (Even if I 
add -p 7002 it does not help.)
I did not get any status about the cluster. 

So far I have not found anything in the logs, but I could also not check 
the status of the cluster. 

Does somebody have an idea how to change the configuration, or what I have 
to change so that the cluster works?

Thanks for your help
BR
Ralf



Attached is the configuration that was set in cassandra.yaml (from node 1, 
which should also act as seed node):
cluster_name: 'TestCluster'
num_tokens: 256
max_hint_window_in_ms: 1080 # 3 hours
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_flush_period_in_ms: 1
max_hints_file_size_in_mb: 128
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
roles_validity_in_ms: 2000
permissions_validity_in_ms: 2000
credentials_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
- /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
disk_failure_policy: stop
commit_failure_policy: stop
prepared_statements_cache_size_mb:
thrift_prepared_statements_cache_size_mb:
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 1
commitlog_segment_size_in_mb: 32
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
  parameters:
  - seeds: "10.20.39.1"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
concurrent_materialized_view_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7002
ssl_storage_port: 7001
listen_address: 10.20.39.1
broadcast_address: 10.20.39.1
start_rpc: false
rpc_address: 0.0.0.0
rpc_port: 9160
broadcast_rpc_address: 10.20.39.1
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
column_index_cache_size_in_kb: 2
compaction_throughput_mb_per_sec: 16
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 1
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 6
request_timeout_in_ms: 1
cross_node_timeout: false
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100 
dynamic_snitch_reset_interval_in_ms: 60
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
internode_encryption: none
keystore: conf/.keystore
keystore_password: cassandra
truststore: conf/.truststore
truststore_password: cassandra
client_encryption_options:
enabled: false
optional: false
keystore: conf/.keystore
keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
windows_timer_interval: 1
transparent_data_encryption_options:
enabled: false
chunk_length_kb: 64
cipher: AES/CBC/PKCS5Padding
key_alias: testing:1
# CBC IV length for AES needs to be 16 bytes (which is also the default size)
# iv_length: 16
key_provider: 
  - class_name: org.apache.cassandra.security.JKSKeyProvider
parameters: 
  - keystore: conf/.keystore
keystore_password: cassandra
store_type: JCEKS
key_password: cassandra
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 10
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
unlogged_batch_across_partitions_warn_threshold: 10
compaction_large_partition_warning_threshold_mb: 100
gc_warn_threshold_in_ms: 1000
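
One thing worth noting about the problem described above: as Sebastian points out, the command is `nodetool` (no hyphen), and its -p flag selects the JMX port (default 7199), not the internode storage_port, so `-p 7002` points it at the wrong service. A hedged sketch with a DRY_RUN wrapper so it only prints; the host value and the assumption that the container exposes JMX on 7199 are examples, not taken from this thread.

```shell
# nodetool talks JMX, not the storage protocol: -p is the JMX port.
# DRY_RUN=1 only prints the command; set DRY_RUN=0 to actually run it.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# Inside the container (or with the JMX port published), target the JMX port,
# leaving storage_port 7002 out of it entirely:
run nodetool -h 127.0.0.1 -p 7199 status
```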
 

C* files getting stuck

2016-06-29 Thread Amit Singh F
Hi All

We are running Cassandra 2.0.14 and disk usage is very high. On investigating 
further, we found around 4-5 files (~150 GB) stuck in a deleted-but-open state.

Command Fired : lsof /var/lib/cassandra | grep -i deleted

Output :

java 12158 cassandra 308r REG 8,16 34396638044 12727268 
/var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-16481-Data.db
 (deleted)
java 12158 cassandra 327r REG 8,16 101982374806 12715102 
/var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-126861-Data.db
 (deleted)
java 12158 cassandra 339r REG 8,16 12966304784 12714010 
/var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-213548-Data.db
 (deleted)
java 12158 cassandra 379r REG 8,16 15323318036 12714957 
/var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-182936-Data.db
 (deleted)

We are not able to see these files in any directory. This is somewhat similar to 
https://issues.apache.org/jira/browse/CASSANDRA-6275, which is marked fixed, but 
the issue is still present on a higher version. Also, no compaction-related 
errors are reported in the logs.

Could anyone please suggest how to counter this? Restarting Cassandra is one 
solution, but the issue keeps recurring, and restarting a production machine so 
frequently is not recommended.

We know that this version is no longer supported, but there is a high 
probability that this can occur in higher versions too.
Regards
Amit Singh
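
Since lsof already prints a SIZE column (field 7) for each deleted-but-still-open file, the space those files pin can be totalled with a small pipeline. Shown here against a canned sample line copied from the output above so it runs anywhere; on a live node you would feed it `lsof /var/lib/cassandra` instead.

```shell
# Sum the bytes held by deleted-but-open files from lsof-style output.
# lsof columns: COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME, so $7 is SIZE.
sample='java 12158 cassandra 308r REG 8,16 34396638044 12727268 /var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-16481-Data.db (deleted)'
printf '%s\n' "$sample" |
  grep -i deleted |
  awk '{ total += $7 } END { printf "%.1f GB held by deleted files\n", total / 1e9 }'
# -> 34.4 GB held by deleted files
```

Until a restart (or the fix for the underlying leak), this total is the gap you will see between `df` and `du`.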