Re: Convert single node C* to cluster (rebalancing problem)

2017-06-14 Thread Affan Syed
John, I am a co-worker of Junaid's -- he is out sick, so I just wanted to confirm that one of your shots in the dark is correct. This is an RF of 1: "CREATE KEYSPACE orion WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;" However, how does the
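
A minimal sketch of the usual follow-up once the extra nodes have joined, using the keyspace from the quoted statement; the target RF of 3 is an assumption, not something stated in the thread:

    # Raise the replication factor, then run repair so the new replicas are actually populated.
    cqlsh -e "ALTER KEYSPACE orion WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
    nodetool repair orion    # run on every node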

Re: Apache Cassandra - Memory usage on server

2017-06-14 Thread Thakrar, Jayesh
Asad, The remaining 42 GB of memory on your server is used by the filesystem buffer cache - see the "cached" column and the -/+ buffers/cache line. The OS (Linux) uses all free memory for the filesystem buffer cache and will relinquish it when applications need memory. To see the
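
A quick way to see that split on the server (standard Linux free; the exact column layout varies by procps version):

    # "cached" (and the -/+ buffers/cache line on older versions) is page cache
    # that the kernel hands back to applications on demand.
    free -m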

Re: Question: Large partition warning

2017-06-14 Thread Fay Hou [Data Pipeline & Real-time Analytics]
You really should keep partition size under 100 MB. On Wed, Jun 14, 2017 at 7:28 PM, Thakrar, Jayesh < jthak...@conversantmedia.com> wrote: > Thank you Kurt - that makes sense. > Will certainly reduce it to 1024. > > Greatly appreciate your quick reply. > > Thanks, > > Jayesh > > > > From: kurt

Re: Bottleneck for small inserts?

2017-06-14 Thread Eric Pederson
Using cassandra-stress with the out-of-the-box schema I am seeing around 140k rows/second throughput using one client on each of 3 client machines. On the servers: - CPU utilization: 43% usr/20% sys, 55%/28%, 70%/10% (the last pair is the older box) - Inbound network traffic: 174 Mbps,
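
For reference, an invocation along these lines (the node address and thread count are placeholders, not the values used in this test):

    # Default "out of the box" write workload from one client against a single node.
    cassandra-stress write n=1000000 -rate threads=200 -node 10.0.0.1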

Re: Question: Large partition warning

2017-06-14 Thread Thakrar, Jayesh
Thank you Kurt - that makes sense. Will certainly reduce it to 1024. Greatly appreciate your quick reply. Thanks, Jayesh From: kurt greaves Sent: Wednesday, June 14, 5:53 PM Subject: Re: Question: Large partition warning To: Fay Hou [Data Pipeline & Real-time Analytics] Cc: Thakrar,
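
For anyone following along, the setting being reduced lives in cassandra.yaml (the path below assumes a package install):

    # /etc/cassandra/cassandra.yaml
    #     compaction_large_partition_warning_threshold_mb: 1024
    grep compaction_large_partition_warning_threshold_mb /etc/cassandra/cassandra.yaml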

RE: Question: Large partition warning

2017-06-14 Thread ZAIDI, ASAD A
Check the size of your table’s partitions (nodetool tablehistograms tsg_ae logs_by_user). You may need to reduce the partition size of your table using the guidelines given in [http://docs.datastax.com/en/archived/cassandra/2.2/cassandra/planning/planPlanningPartitionSize.html] &

Reaper v0.6.1 released

2017-06-14 Thread Jonathan Haddad
Hey folks! I'm proud to announce the 0.6.1 release of the Reaper project, the open source repair management tool for Apache Cassandra. This release improves the Cassandra backend significantly, making it a first class citizen for storing repair schedules and managing repair progress. It's no

Apache Cassandra - Memory usage on server

2017-06-14 Thread ZAIDI, ASAD A
Hi folks, I’m using Apache Cassandra 2.2. The instance is configured with max_heap_size set at 16G and memtable_allocation_type set to offheap_objects – total available memory is 62G on the server. There is nothing but Cassandra running on my Linux server. My Cassandra instance is consuming all
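
For context, where those two settings live in a 2.2 install (paths assume the stock conf layout), plus a quick check of the heap a running node actually sees:

    # conf/cassandra-env.sh:   MAX_HEAP_SIZE="16G"
    # conf/cassandra.yaml:     memtable_allocation_type: offheap_objects
    nodetool info | grep -i heap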

Re: Question: Large partition warning

2017-06-14 Thread kurt greaves
Looks like you've hit a bug (not the first time I've seen this in relation to C* configs). compaction_large_partition_warning_threshold_mb resolves to an int, and in the codebase is represented in bytes. 4096 * 1024 * 1024 and you've got some serious overflow. Granted, you should have this warning
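
The arithmetic, for reference (assuming the MB-to-bytes conversion happens in 32-bit int arithmetic, as described above):

    4096 * 1024 * 1024 = 4,294,967,296 bytes, but a signed 32-bit int tops out at
    2,147,483,647, so the product wraps around (here to 0) and effectively every
    partition crosses the "large partition" threshold.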

Correct ways to use Nodetool JMX Classes in Separate Process

2017-06-14 Thread Nathan Jackels
Hi all, A project I'm working on right now requires that a daemon/service running on the same host as Cassandra be able to connect via JMX for many of the same functions as nodetool and sstablemetadata. The classpath that nodetool uses includes all the jars in cassandra/lib, so we are using the
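
A sketch of reproducing that classpath for a standalone process; the install path and the main class name are placeholders:

    # Put every jar Cassandra ships with on the classpath, much like bin/nodetool does.
    CASSANDRA_HOME=/usr/share/cassandra
    java -cp "$CASSANDRA_HOME/lib/*:$CASSANDRA_HOME/*" com.example.MyJmxDaemon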

Re: Question: Large partition warning

2017-06-14 Thread Fay Hou [Data Pipeline & Real-time Analytics]
nodetool tablehistograms tsg_ae/logs_by_user will give you an estimate of the partition size. It is recommended that partitions not exceed 100 MB. Large partitions also create heap pressure during compactions, which will issue warnings in the logs (look for "large
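
For reference, the same check with keyspace and table as separate arguments (as in Asad's reply); the Partition Size percentiles in the output are the estimate being referred to:

    nodetool tablehistograms tsg_ae logs_by_user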

Question: Large partition warning

2017-06-14 Thread Thakrar, Jayesh
We are on Cassandra 2.2.5 and I am constantly seeing warning messages about large partitions in system.log even though our partition warning threshold is set to 4096 (MB). WARN [CompactionExecutor:43180] 2017-06-14 20:02:13,189 BigTableWriter.java:184 - Writing large partition
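
A quick way to pull those warnings out and see which tables and partition sizes are involved (the log path assumes a package install):

    grep "Writing large partition" /var/log/cassandra/system.log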

Re: Node replacement strategy with AWS EBS

2017-06-14 Thread Hannu Kröger
Hi, So, if it works, great. auto_bootstrap false is not needed when you have the system keyspace, as also mentioned in the article. Now you are likely to have different tokens than the previous node (unless those were manually configured to match the old node), and repair and cleanup are needed to get
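
One common sequencing of those two steps, sketched here as an assumption rather than a quote from the thread:

    # On the replaced node, once it is up on the old EBS volume:
    nodetool repair
    # On the remaining nodes, to drop ranges they no longer own if tokens changed:
    nodetool cleanup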

Re: Node replacement strategy with AWS EBS

2017-06-14 Thread Rutvij Bhatt
Thanks again for your help! To summarize for anyone who stumbles onto this in the future, this article covers the procedure well: https://www.eventbrite.com/engineering/changing-the-ip-address-of-a-cassandra-node-with-auto_bootstrapfalse/ It is more or less what Hannu suggested. I carried out

Upgrade from 3.0.6, where's the documentation?

2017-06-14 Thread Riccardo Ferrari
Hi list, It's been a while since I upgraded my C* to 3.0.6; nevertheless I would like to give TWCS a try (available since 3.0.7). What happened to the upgrade documentation? I used to read a step-by-step procedure from DataStax, but it looks like they are not supporting it anymore, on the
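
Once on 3.0.7 or later, switching a table over to TWCS is a single statement; a sketch with placeholder keyspace, table, and window settings:

    cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {'class': 'TimeWindowCompactionStrategy', 'compaction_window_unit': 'DAYS', 'compaction_window_size': 1};"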

Re: Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread wxn...@zjqunshuo.com
Thanks for the detailed explanation. You did solve my problem. Cheers, -Simon From: Oleksandr Shulgin Date: 2017-06-14 17:09 To: wxn...@zjqunshuo.com CC: user Subject: Re: Cannot achieve consistency level LOCAL_ONE On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com

Re: Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread Oleksandr Shulgin
On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com wrote: > Thanks for the reply. > My system_auth settings are as below, and what should I do with it? And I'm > interested in why the newly added node is responsible for the user > authentication? > > CREATE KEYSPACE
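
The usual remedy for an RF=1 system_auth keyspace (the concrete advice is cut off in the excerpt above, so treat this as a sketch; the RF of 5 simply matches the five-node cluster described in the original question): raise the replication factor so every node can serve auth reads, then repair.

    cqlsh -e "ALTER KEYSPACE system_auth WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 5};"
    nodetool repair system_auth    # run on each node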

Re: Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread wxn...@zjqunshuo.com
Thanks for the reply. My system_auth settings are as below, and what should I do with it? And I'm interested in why the newly added node is responsible for the user authentication? CREATE KEYSPACE system_auth WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND

Re: Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread Oleksandr Shulgin
On Wed, Jun 14, 2017 at 9:11 AM, wxn...@zjqunshuo.com wrote: > Hi, > Cluster setup: > 1 DC with 5 nodes (each node having 700GB data) > 1 keyspace with RF of 2 > write CL is LOCAL_ONE > read CL is LOCAL_QUORUM > > One node was down for about 1 hour because of an OOM issue.

Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread wxn...@zjqunshuo.com
Hi, Cluster setup: 1 DC with 5 nodes (each node having 700GB data) 1 keyspace with RF of 2 write CL is LOCAL_ONE read CL is LOCAL_QUORUM One node was down for about 1 hour because of an OOM issue. During the down period, all 4 other nodes report "Cannot achieve consistency level LOCAL_ONE"