Re: How to find which table partitions having the more reads per sstables ?

2020-03-16 Thread Léo FERLIN SUTTON
https://docs.datastax.com/en/dse/6.7/dse-admin/datastax_enterprise/operations/opsOpscenterDashboardMetrics.html Regards, Leo On Mon, Mar 16, 2020 at 11:29 AM Léo FERLIN SUTTON wrote: > I'm sure there is a way to find it in Opscenter but I've never used it so > I don't know. > > The easi

Re: How to find which table partitions having the more reads per sstables ?

2020-03-16 Thread Léo FERLIN SUTTON
Is there any options to find in Opscenter too ? > > Best Regards, > Kiran.M.K. > > On Mon, Mar 16, 2020 at 2:20 PM Léo FERLIN SUTTON > wrote: > > > > You can look up this Mbean : SSTablesPerReadHistogram (via jmx) > > > > You will have one metric per table,

Re: How to find which table partitions having the more reads per sstables ?

2020-03-16 Thread Léo FERLIN SUTTON
You can look up this MBean : SSTablesPerReadHistogram (via jmx) You will have one metric per table, try to find the biggest one. You can find more info here : http://cassandra.apache.org/doc/latest/operating/metrics.html#table-metrics On Mon, Mar 16, 2020 at 9:11 AM Kiran mk wrote: > Hi All, >
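The same per-table figure is also exposed through nodetool, without wiring up a JMX client. A minimal sketch, using placeholder keyspace/table names:

```
# Per-table histograms; the "SSTables" column is the number of sstables
# touched per read at each percentile
nodetool tablehistograms keyspace1 table1
```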

Re: Uneven token distribution with allocate_tokens_for_keyspace

2020-01-24 Thread Léo FERLIN SUTTON
Hi Anthony ! I have a follow-up question : Check to make sure that no other node in the cluster is assigned any of the > four tokens specified above. If there is another node in the cluster that > is assigned one of the above tokens, increment the conflicting token by > values of one until no
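A quick way to do that conflict check before assigning the tokens manually; a sketch with placeholder token values (the real ones come from the procedure quoted above):

```
# Dump every token currently owned by any node in the cluster
nodetool ring | awk 'NF {print $NF}' | sort -n > /tmp/assigned_tokens

# Check each candidate token for the new node (placeholder values);
# any hit is a conflict -> increment that token by 1 and re-check
for tok in -9223372036854775808 -4611686018427387904 0 4611686018427387904; do
  grep -qx -- "$tok" /tmp/assigned_tokens && echo "conflict: $tok"
done
```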

Re: Medusa : a new OSS backup/restore tool for Apache Cassandra

2019-11-07 Thread Léo FERLIN SUTTON
Seems great ! Thank you :) We had tried https://github.com/GoogleCloudPlatform/cassandra-cloud-backup but it was not satisfying, looking forward to trying medusa ! On Thu, Nov 7, 2019 at 4:35 PM Ahmed Eljami wrote: > Thanks for open-sourcing your work, TLP ! > > cassandra-reaper and now Medusa,
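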

Re: Sizing a cluster

2019-10-01 Thread Léo FERLIN SUTTON
Hi ! I'm not an expert but don't forget that cassandra needs space to do its compactions. Take a look at the worst case scenarios from this datastax grid : https://docs.datastax.com/en/dse-planning/doc/planning/capacityPlanning.html#capacityPlanning__disk > The size of a picture + data is
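As a rough worked example of that headroom issue (hypothetical numbers, not from this thread): with size-tiered compaction the worst case can temporarily need a full extra copy of the data being compacted, while LCS needs far less transient space.

```
disk_gb=2000                   # raw disk available per node
data_gb=1200                   # data currently stored per node
needed_gb=$((data_gb * 2))     # STCS worst case: original + rewritten copy
echo "worst case needs ${needed_gb} GB, disk has ${disk_gb} GB"
# -> 2400 GB needed vs 2000 GB available: this node is already too full for STCS
```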

Re: How to delete huge partition in cassandra 3.0.13

2019-08-28 Thread Léo FERLIN SUTTON
So you have deleted the partition. Do not delete the sstables directly. By default cassandra will keep the tombstones untouched for 10 days. Once 10 days have passed (should be done now since your message was on August 12) a compaction is needed to actually reclaim the disk space. You could force a
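If you would rather trigger that compaction yourself than wait for one, a minimal sketch (keyspace/table names are placeholders; on 3.0 this is a major compaction of the table, so check free disk first):

```
# Confirm the table's gc_grace_seconds (default 864000 s = 10 days)
cqlsh -e "DESCRIBE TABLE my_keyspace.my_table;" | grep gc_grace_seconds

# Force a compaction of that table so expired tombstones and the
# deleted partition's data are actually dropped from disk
nodetool compact my_keyspace my_table
```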

Re: Tombstones not getting purged

2019-06-20 Thread Léo FERLIN SUTTON
o that's >> probably why your tombstones are sticking around. >> >> Your best shot here will be a major compaction of that table, since it >> doesn't seem so big. Remember to use the --split-output flag on the >> compaction command to avoid ending up with a single SSTable after t
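For reference, that flag lives on `nodetool compact` (short form `-s`) and only applies to size-tiered tables; a sketch with placeholder names:

```
# Major compaction, but write the output as several sstables
# (~50%, 25%, 12.5%, ... of the data) instead of one huge file
nodetool compact --split-output my_keyspace my_table
```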

Re: Tombstones not getting purged

2019-06-20 Thread Léo FERLIN SUTTON
sful attempts (too long and too much disk space used, so abandoned), and we are currently trying to tweak the compaction parameters to speed things up. Regards. Leo On Thu, Jun 20, 2019 at 7:02 AM Jeff Jirsa wrote: > >> Probably overlapping sstables >> >> Which comp

Tombstones not getting purged

2019-06-19 Thread Léo FERLIN SUTTON
I have used the following command to check if I had droppable tombstones : `/usr/bin/sstablemetadata --gc_grace_seconds 259200 /var/lib/cassandra/data/stats/tablename/md-sstablename-big-Data.db` I checked every sstable in a loop and had 4 sstables with droppable tombstones : ``` Estimated
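For anyone who wants to reproduce the check, a sketch of such a loop; the path and gc_grace value are taken from the command above, and the exact output wording of sstablemetadata can vary between versions:

```
GC_GRACE=259200
for f in /var/lib/cassandra/data/stats/tablename/md-*-big-Data.db; do
  ratio=$(/usr/bin/sstablemetadata --gc_grace_seconds "$GC_GRACE" "$f" \
            | awk '/Estimated droppable tombstones/ {print $4}')
  echo "$ratio $f"
done | sort -rn | head    # sstables with the most droppable tombstones first
```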

Re: Speed up compaction

2019-06-13 Thread Léo FERLIN SUTTON
On Thu, Jun 13, 2019 at 2:44 PM Oleksandr Shulgin < oleksandr.shul...@zalando.de> wrote: > On Thu, Jun 13, 2019 at 2:07 PM Léo FERLIN SUTTON > wrote: > >> >> Overall we are talking about a 1.08TB table, using LCS. >> >> SSTable count: 1047 >>> SS

Re: very slow repair

2019-06-13 Thread Léo FERLIN SUTTON
> > Last, but not least: are you using the default number of vnodes, 256? The > overhead of large number of vnodes (times the number of nodes), can be > quite significant. We've seen major improvements in repair runtime after > switching from 256 to 16 vnodes on Cassandra version 3.0. Is there
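For context, those vnode settings live in cassandra.yaml and only take effect for nodes joining a new ring or datacenter (they cannot be changed in place); a sketch, with the keyspace name as a placeholder:

```
# cassandra.yaml on the nodes of the new ring/DC:
#   num_tokens: 16
#   allocate_tokens_for_keyspace: my_keyspace   # balance ownership for that keyspace's RF
# After the nodes have joined, verify ownership is even:
nodetool status my_keyspace
```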

Re: Speed up compaction

2019-06-13 Thread Léo FERLIN SUTTON
On Thu, Jun 13, 2019 at 12:09 PM Oleksandr Shulgin < oleksandr.shul...@zalando.de> wrote: > On Thu, Jun 13, 2019 at 11:28 AM Léo FERLIN SUTTON > wrote: > >> >> ## Cassandra configuration : >> 4 concurrent_compactors >> Current compaction throughput: 150

Speed up compaction

2019-06-13 Thread Léo FERLIN SUTTON
I am currently noticing very very slow compactions on my cluster and wondering if there is any way to speed things up. Right now I have this compaction currently running : 60c1cfc0-8da7-11e9-bc08-3546c703a280 Compaction keyspace1 table1 8.77 GB 1.71 TB bytes 0.50%
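The two knobs that usually matter here can be adjusted at runtime; a sketch, assuming there is spare disk and CPU (setconcurrentcompactors may not exist on every 3.0.x release, in which case change concurrent_compactors in cassandra.yaml and restart):

```
# Current state
nodetool compactionstats -H        # -H = human-readable sizes
nodetool getcompactionthroughput

# Unthrottle compaction (0 = unlimited; pick a finite MB/s if reads suffer)
nodetool setcompactionthroughput 0

# More parallel compaction threads (takes effect until the next restart)
nodetool setconcurrentcompactors 8
```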

Re: SStable format change in 3.0.18 ?

2019-04-04 Thread Léo FERLIN SUTTON
ring > > Dmitry Saprykin > > On Thu, Apr 4, 2019 at 11:23 AM Léo FERLIN SUTTON > wrote: > >> Hello ! >> >> I have noticed something since I upgraded to cassandra 3.0.18. >> >> Before all my Sstable used to be named this way : >> ``` >> mc

SStable format change in 3.0.18 ?

2019-04-04 Thread Léo FERLIN SUTTON
Hello ! I have noticed something since I upgraded to cassandra 3.0.18. Before all my Sstable used to be named this way : ``` mc-130817-big-CompressionInfo.db mc-130817-big-Data.db mc-130817-big-Digest.crc32 mc-130817-big-Filter.db mc-130817-big-Index.db mc-130817-big-Statistics.db

Re: Query failure

2019-03-14 Thread Léo FERLIN SUTTON
across all > nodes in the cluster. The responses you’re seeing are totally indicative of > being connected to a node where PasswordAuthenticator is not enabled in > cassandra.yaml. > > Thanks, > Sam > > On 14 Mar 2019, at 10:56, Léo FERLIN SUTTON > wrote: > > Hello ! > &
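Given the intermittent behaviour described in this thread, the quickest check is whether every node really has the same authenticator; a sketch, where the host list and config path are assumptions for a typical package install:

```
# The setting must match on every node in the cluster
for host in node1 node2 node3; do
  echo -n "$host: "
  ssh "$host" "grep -E '^authenticator:' /etc/cassandra/cassandra.yaml"
done
# A node still on 'AllowAllAuthenticator' behaves differently from the
# PasswordAuthenticator nodes, which produces exactly this kind of
# "sometimes it works, sometimes it doesn't" symptom
```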

Query failure

2019-03-14 Thread Léo FERLIN SUTTON
Hello ! Recently I have noticed some clients are having errors almost every time they try to contact my Cassandra cluster. The error messages vary but there is one constant : *It's not constant* ! Let me show you : From the client host : `cqlsh --cqlversion "3.4.0" -u cassandra_superuser -p

Re: Bootstrap keeps failing

2019-03-12 Thread Léo FERLIN SUTTON
, Feb 14, 2019 at 7:41 PM Léo FERLIN SUTTON wrote: > On Thu, Feb 14, 2019 at 6:56 PM Kenneth Brotman > wrote: > >> Those aren’t the same error messages so I think progress has been made. >> >> >> >> What version of C* are you running? >> > 3.0.17 We wil

Re: Bootstrap keeps failing

2019-02-14 Thread Léo FERLIN SUTTON
ology changes to cleanup. `nodetool cleanup` did miracles. Regards, Leo > > *From:* Léo FERLIN SUTTON [mailto:lfer...@mailjet.com.INVALID] > *Sent:* Thursday, February 14, 2019 7:54 AM > *To:* user@cassandra.apache.org > *Subject:* Re: Bootstrap keeps failing > > > > Hello a

Re: Bootstrap keeps failing

2019-02-14 Thread Léo FERLIN SUTTON
nup` on our most "critical" nodes to see > if it helps. If that doesn't do the trick we will only have two solutions : > >- Add more disk space on each node >- Adding new nodes > > We have looked at some other companies case studies and it looks like

Re: Bootstrap keeps failing

2019-02-08 Thread Léo FERLIN SUTTON
d nodes, and are hoping to eventually transition to a "lot of small nodes" model and be able to add nodes a lot faster. Thank you again for your interest, Regards, Leo > *From:* Léo FERLIN SUTTON [mailto:lfer...@mailjet.com.INVALID] > *Sent:* Friday, February 08, 2019 6:16 AM > *T

Re: Bootstrap keeps failing

2019-02-08 Thread Léo FERLIN SUTTON
bootstrap resume` on the instance. Thank you for you interest in our issue ! Regards, Leo > > > *From:* Léo FERLIN SUTTON [mailto:lfer...@mailjet.com.INVALID] > *Sent:* Thursday, February 07, 2019 9:16 AM > *To:* user@cassandra.apache.org > *Subject:* Re: [EXTERNAL] Re: B

Re: [EXTERNAL] Re: Bootstrap keeps failing

2019-02-07 Thread Léo FERLIN SUTTON
o the intermediate network equipment. > > > > Sean Durity > > *From:* Léo FERLIN SUTTON > *Sent:* Thursday, February 07, 2019 10:26 AM > *To:* user@cassandra.apache.org; dinesh.jo...@yahoo.com > *Subject:* [EXTERNAL] Re: Bootstrap keeps failing > > > > Hello

Bootstrap keeps failing

2019-02-06 Thread Léo FERLIN SUTTON
Hello ! I am having a recurrent problem when trying to bootstrap a few new nodes. Some general info : - I am running cassandra 3.0.17 - We have about 30 nodes in our cluster - All healthy nodes have between 60% and 90% used disk space on /var/lib/cassandra So I create a new node and
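The replies above mention two commands that did the trick; a minimal sketch of how they are used (run on the nodes indicated in the comments):

```
# On the joining node: retry only the failed streams instead of
# wiping the node and bootstrapping from scratch
nodetool bootstrap resume

# On the existing, nearly-full nodes: drop data they no longer own
# after past topology changes, freeing disk before streaming
nodetool cleanup
```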

Re: Why and How is Cassandra using all my ram ?

2018-07-26 Thread Léo FERLIN SUTTON
Hello again, > It's possible that glibc is creating too many memory arenas. Are you > setting/exporting MALLOC_ARENA_MAX to something sane before calling > the JVM? You can check that in /proc/<pid>/environ. I checked and we have the default value of 4. > I would also turn on
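For anyone checking the same thing, a sketch; the PID lookup and the cassandra-env.sh location are assumptions for a typical package install:

```
# See what the running JVM actually inherited (environ is NUL-delimited)
pid=$(pgrep -f CassandraDaemon | head -1)
tr '\0' '\n' < /proc/$pid/environ | grep MALLOC_ARENA_MAX

# To pin it, export it before the JVM starts, e.g. in /etc/cassandra/cassandra-env.sh:
#   export MALLOC_ARENA_MAX=4
```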

Re: Why and How is Cassandra using all my ram ?

2018-07-24 Thread Léo FERLIN SUTTON
On Tue, Jul 24, 2018 at 4:04 AM, Dennis Lovely wrote: > you define the max size of your heap (-Xmx), but you do not define the max > size of your offheap (MaxMetaspaceSize for jdk 8, PermSize for jdk7), so you > could occupy all of the memory on the instance. Yes I think we should set up a

Re: Why and How is Cassandra using all my ram ?

2018-07-24 Thread Léo FERLIN SUTTON
On Mon, Jul 23, 2018 at 11:44 PM, Mark Rose wrote: > Hi Léo, > > It's possible that glibc is creating too many memory arenas. Are you > setting/exporting MALLOC_ARENA_MAX to something sane before calling > the JVM? You can check that in /proc/<pid>/environ. > I have checked and the MALLOC_ARENA_MAX

Why and How is Cassandra using all my ram ?

2018-07-19 Thread Léo FERLIN SUTTON
Hello list ! I have a question about cassandra memory usage. My cassandra nodes are slowly using up all my ram until they get OOM-Killed. When I check the memory usage with nodetool info the memory (off-heap+heap) doesn't match what the java process is really using. I tried to use pmap to see
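A sketch of the comparison being described, to quantify the gap between what Cassandra reports and what the kernel charges the process (metric labels are those printed by nodetool info on 3.x; treat them as an assumption for other versions):

```
pid=$(pgrep -f CassandraDaemon | head -1)

# What Cassandra accounts for itself
nodetool info | grep -E 'Heap Memory|Off Heap Memory'

# What the OS sees for the whole JVM process (resident set size, in kB)
grep VmRSS /proc/$pid/status
# The difference is native memory outside Cassandra's own accounting:
# glibc arenas, metaspace, thread stacks, NIO direct buffers, etc.
```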