High system CPU during high write workload

2016-11-14 Thread Abhishek Gupta
Hi,

We are seeing an issue where the system CPU is shooting up to a figure of
>90% when the cluster is subjected to a relatively high write workload, i.e.
4k wreq/sec.

2016-11-14T13:27:47.900+0530 Process summary
  process cpu=695.61%
  application cpu=676.11% (user=200.63% sys=475.49%)  <== very high system CPU
  other: cpu=19.49%
  heap allocation rate 403mb/s
[000533] user= 1.43% sys= 6.91% alloc= 2216kb/s - SharedPool-Worker-129
[000274] user= 0.38% sys= 7.78% alloc= 2415kb/s - SharedPool-Worker-34
[000292] user= 1.24% sys= 6.77% alloc= 2196kb/s - SharedPool-Worker-56
[000487] user= 1.24% sys= 6.69% alloc= 2260kb/s - SharedPool-Worker-79
[000488] user= 1.24% sys= 6.56% alloc= 2064kb/s - SharedPool-Worker-78
[000258] user= 1.05% sys= 6.66% alloc= 2250kb/s - SharedPool-Worker-41

Running strace showed that the futex system call is consuming almost all of
the system CPU:
 timeout 10s strace -f -p 5954 -c -q
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 88.33 1712.798399       16674    102723     22191 futex
  3.98   77.098730        4356     17700           read
  3.27   63.474795      394253       161        29 restart_syscall
  3.23   62.601530       29768      2103           epoll_wait

On searching, we found the following bug in the RHEL 6.6 / CentOS 6.6
kernel, which seems to be a probable cause of the issue:

https://docs.datastax.com/en/landing_page/doc/landing_page/troubleshooting/cassandra/fetuxWaitBug.html

The patch mentioned in the doc is also not present in our kernel:

sudo rpm -q --changelog kernel-`uname -r` | grep futex | grep ref
- [kernel] futex_lock_pi() key refcnt fix (Danny Feng) [566347]
{CVE-2010-0623}
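For anyone repeating this check, here is a hedged sketch of the same changelog test wrapped in a readable yes/no answer (it assumes an RPM-based distro such as RHEL/CentOS; on systems without rpm it simply reports the fix as not found):

```shell
# Sketch: does the running kernel's RPM changelog mention a futex
# refcount fix? Assumes RHEL/CentOS-style kernel packaging; on distros
# without rpm the check degrades to "NOT found".
if rpm -q --changelog "kernel-$(uname -r)" 2>/dev/null \
     | grep -i futex | grep -qi ref; then
  verdict="futex fix present in kernel changelog"
else
  verdict="futex fix NOT found in kernel changelog"
fi
echo "$verdict"
```

If the verdict is "NOT found", the kernel is a candidate for the futex_wait bug described in the DataStax doc above.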

Can someone who has faced and resolved this issue help us here?

Thanks,
Abhishek


Re: [Cassandra 3.0.9 ] Disable “delete/Truncate/Drop”

2017-04-04 Thread Abhishek Gupta
Hi Abhishek,

TRUNCATE is very much a part of CQL, and it does exactly what the name
suggests, i.e. deleting all the rows of the table.

TRUNCATE sends a JMX command to all nodes, telling them to delete the
SSTables that hold the data of the specified table. If any of these nodes is
down or doesn't respond, the command fails. Hence, it is important to ensure
that all nodes are up (use nodetool status) and to run the command with
CONSISTENCY ALL.

Please see
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlTruncate.html for
further details.
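As an illustration (the keyspace and table names here are hypothetical), a cautious TRUNCATE session in cqlsh might look like:

```cql
-- Verify all nodes are up first, e.g. with: nodetool status
CONSISTENCY ALL;      -- fail fast if any replica is unreachable
TRUNCATE ks1.events;  -- removes all rows; the table schema remains
```

Note that TRUNCATE removes data but not the table definition, unlike DROP TABLE.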

Thanks,
Abhishek


On Tue, Apr 4, 2017 at 1:58 PM, Abhishek Kumar Maheshwari <
abhishek.maheshw...@timesinternet.in> wrote:

> Hi all,
>
>
>
> Is there any way to disable “delete/Truncate/Drop” commands on Cassandra?
>
>
>
> If yes, then how can we implement this?
>
>
>
> *Thanks & Regards,*
> *Abhishek Kumar Maheshwari*
> *+91- 805591 (Mobile)*
>
> Times Internet Ltd. | A Times of India Group Company
>
> FC - 6, Sector 16A, Film City,  Noida,  U.P. 201301 | INDIA
>
>


Re: Apache Cassandra - Configuration Management

2017-05-17 Thread Abhishek Gupta
Hi Zaidi,

We use Chef for the configuration management of our 14 node cluster.

You can have a look at Chef, or at other config management tools such as
Ansible and Puppet.
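To give a feel for what this looks like in practice, here is a minimal Ansible play (a sketch only; the inventory group name, template path, and service name are assumptions) that renders one cassandra.yaml template onto every node:

```yaml
# Hypothetical playbook: push a single templated cassandra.yaml to all
# nodes in the "cassandra" inventory group, then restart the service.
- hosts: cassandra
  become: true
  tasks:
    - name: Render cassandra.yaml from one shared template
      template:
        src: templates/cassandra.yaml.j2
        dest: /etc/cassandra/cassandra.yaml
        owner: cassandra
        group: cassandra
        mode: "0644"
      notify: restart cassandra
  handlers:
    - name: restart cassandra
      service:
        name: cassandra
        state: restarted
```

Per-node values (listen_address, rack, etc.) go into the Jinja2 template as variables, so the file stays consistent across the cluster without manual edits.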



Thanks,
Abhishek

On May 17, 2017 10:08 PM, "DuyHai Doan"  wrote:

> For configuration management there are tons of tools out there:
>
> - ansible
> - chef
> - puppet
> - saltstack
>
> I surely forgot a few others
>
>
> On Wed, May 17, 2017 at 6:33 PM, ZAIDI, ASAD A  wrote:
>
>> Good Morning Folks –
>>
>>
>>
>> I’m running a 14-node Cassandra cluster in two data centers; each node
>> has roughly 1.5TB. We’re anticipating more load, therefore we’ll be
>> expanding the cluster with additional nodes.
>>
>> At this time, I’m struggling to keep a consistent cassandra.yaml file on
>> each server – I’m maintaining the yaml file manually. The only tool I
>> have is Splunk, which is only used to ‘monitor’ threads.
>>
>>
>>
>> Would you guys please suggest an open source tool that can help maintain
>> the cluster? I’ll really appreciate your reply – Thanks/Asad
>>
>
>