the
affected ones.
It would cost some time and money, but that's better than having nodetool not
working.
Best,
Sergio
On Sun, Feb 26, 2023 at 10:51 AM Abe Ratnofsky
wrote:
> Hey Mitch,
>
> The security upgrade schedule that your colleague is working on may well
> be relevant. Is
However, if I were you I would avoid that... Maybe I would store a URL to S3
or GFS in Cassandra instead.
Best,
Sergio
On Tue, May 31, 2022, 4:10 PM Sergio wrote:
> You have to split it by yourself
> Best,
> Sergio
>
> On Tue, May 31, 2022, 3:56 PM Andria Trigeorgis
> wrote:
>
You have to split it by yourself
Best,
Sergio
On Tue, May 31, 2022, 3:56 PM Andria Trigeorgis
wrote:
> Thank you for your prompt reply!
> So, do I have to split the blob into chunks by myself, or is there any
> fragmentation mechanism in Cassandra?
>
>
> On 31 May 2022, at 4:44 P
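For what it's worth, a minimal client-side chunking sketch (the function names and the 1 MB chunk size are illustrative, not from this thread): each chunk would be stored as its own row, e.g. keyed by (blob_id, chunk_index), and reassembled on read.

```python
def split_blob(data: bytes, chunk_size: int = 1 << 20):
    """Split a blob into fixed-size chunks, one per row/partition."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def join_chunks(chunks):
    """Reassemble the original blob from its ordered chunks."""
    return b"".join(chunks)

blob = b"x" * (3 * 1024 * 1024 + 17)           # ~3 MB payload
chunks = split_blob(blob, chunk_size=1 << 20)   # 1 MB chunks
assert len(chunks) == 4                         # 3 full chunks + 1 remainder
assert join_chunks(chunks) == blob
```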
Sorry for the dumb question:
When we refer to 1000 nodes divided into 10 clusters (shards), we would have
100 nodes per cluster.
A shard is not intended as a datacenter but as a cluster in itself that
doesn't talk to the other ones, so there should be some routing logic
at the application level.
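A minimal sketch of what that application-level routing logic could look like (the cluster names and the CRC32 hash choice are assumptions, not from the thread):

```python
import zlib

CLUSTERS = [f"cluster-{i}" for i in range(10)]  # 10 independent clusters (shards)

def route(partition_key: str) -> str:
    """Pick the cluster that owns this key. The hash must be stable across
    processes and restarts, so use crc32 rather than Python's randomized hash()."""
    return CLUSTERS[zlib.crc32(partition_key.encode()) % len(CLUSTERS)]

assert route("user:42") == route("user:42")  # deterministic routing
assert route("user:42") in CLUSTERS
```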
The problem is that the folder is not under snapshots; it is under the data
path.
I tried with the --all switch too.
Thanks,
Sergio
On Thu, Apr 30, 2020, 4:21 PM Nitan Kainth wrote:
> I don't think it works like that. clearsnapshot --all would remove all
> snapshots. Here is an example:
that removes the
column_family folder for each node.
I tried the nodetool clearsnapshot command but it didn't work, and when I run
nodetool listsnapshots I don't see anything. It's as if that occupied space
is hidden.
Any suggestion?
Thanks,
Sergio
Hi Erick!
Just follow up to your statement:
Limiting the seeds to 2 per DC means:
A) Each node in a DC has at least 2 seeds, and those seeds belong to the
same DC
or
B) Each node in a DC has at least 2 seeds, even across different DCs
Thanks,
Sergio
On Thu, Feb 13, 2020 at 7:46 PM
do you use
batch then?
Best,
Sergio
On Thu, Feb 20, 2020, 6:18 PM Erick Ramirez
wrote:
> Batches aren't really meant for optimisation in the same way as RDBMS. If
> anything, it will just put pressure on the coordinator having to fire off
> multiple requests to lots of replicas. The IN
in the IN statement or handle it with a Cassandra BATCH query, and in
particular I was looking at
https://docs.spring.io/spring-data/cassandra/docs/current/api/org/springframework/data/cassandra/core/ReactiveCassandraBatchOperations.html#delete-java.lang.Iterable-
Thanks,
Sergio
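As a sketch of the alternative Erick hints at, a multi-key IN delete can be expanded into one single-partition statement per key, which the driver can then execute asynchronously so each delete is routed straight to its replicas instead of through one coordinator. Table and column names here are made up:

```python
def expand_in_delete(table: str, key_column: str, keys):
    """Turn one multi-partition `DELETE ... WHERE k IN (...)` into
    per-partition statements suitable for async execution."""
    return [f"DELETE FROM {table} WHERE {key_column} = {k!r}" for k in keys]

stmts = expand_in_delete("ks.events", "user_id", ["a", "b", "c"])
assert len(stmts) == 3
assert stmts[0] == "DELETE FROM ks.events WHERE user_id = 'a'"
```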
I really like these conversations. So feel free to continue this one or
create a new one Thanks to everyone participating :)
On Sun, Feb 16, 2020 at 2:04 PM Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> No actually in this case I didn’t really have an opinion because C* is
Thank you for the advice!
Best!
Sergio
On Thu, Feb 13, 2020, 7:44 PM Erick Ramirez
wrote:
> Option 1 is a cheaper option because the cluster doesn't need to rebalance
> (with the loss of a replica) post-decommission then rebalance again when
> you add a new node.
>
> The
Thank you very much for this helpful information!
I opened a new thread for the other question :)
Sergio
On Thu, Feb 13, 2020 at 7:22 PM Erick Ramirez <
erick.rami...@datastax.com> wrote:
> I want to have more than one seed node in each DC, so unless I don't
>> r
- Decommission the one that is going to be retired
- Run cleanup with cstar across the datacenters
?
Thanks,
Sergio
restart even across DCs, or should the restart
happen only on the new node that I want as a seed?
The reason is that each datacenter has:
a seed from the DC it belongs to and a seed from the other DC.
Thanks,
Sergio
On Thu, Feb 13, 2020 at 6:41 PM Erick Ramirez <
erick.r
?
Thanks,
Sergio
On Thu, Feb 13, 2020 at 6:15 PM Erick Ramirez <
erick.rami...@datastax.com> wrote:
> I did decommission of this node and I did all the steps mentioned except
>> the -Dcassandra.replace_address and now it is streaming correctly!
>
>
> That wo
deactivated the future repair happening in the cluster while this node is
joining.
When you add a node, is it better to stop the repair process?
Thank you very much Erick!
Best,
Sergio
On Thu, Feb 13, 2020 at 5:52 PM Erick Ramirez <
erick.rami...@datastax.com> wrote:
> Sh
Thanks for your fast reply!
No repairs are running!
https://cassandra.apache.org/doc/latest/faq/index.html#does-single-seed-mean-single-point-of-failure
I added the node IP itself and the IP of existing seeds and I started
Cassandra.
So the right procedure is not to add to the seed list the
Should I do something to fix it or leave it as is?
On Thu, Feb 13, 2020, 5:29 PM Jon Haddad wrote:
> Seeds don't bootstrap, don't list new nodes as seeds.
>
> On Thu, Feb 13, 2020 at 5:23 PM Sergio wrote:
>
>> Hi guys!
>>
>> I don't know how but this is the first ti
,
Sergio
/
Thanks,
Sergio
On Wed, Feb 12, 2020 at 10:58 AM Durity, Sean R <
sean_r_dur...@homedepot.com> wrote:
> Check the readme.txt for any upgrade notes, but the basic procedure is to:
>
>- Verify that nodetool upgradesstables has completed successfully on
>
I should follow the steps above, right?
Thanks Erick!
On Wed, Feb 12, 2020, 6:58 PM Erick Ramirez
wrote:
> In case you have a hybrid situation with 3.11.3, 3.11.4 and 3.11.5 that
>> is working in production, what do you recommend?
>
>
> You shouldn't end up in this mixed-version
Thanks everyone!
In case you have a hybrid situation with 3.11.3, 3.11.4 and 3.11.5 that
is working in production, what do you recommend?
On Wed, Feb 12, 2020, 5:55 PM Erick Ramirez
wrote:
> So unless the sstable format has changed, I can avoid doing that.
>
>
> Just to
/thread.html/r21cd99fa269076d186a82a8b466eb925681373302dd7aa6bb26e5bde%40%3Cuser.cassandra.apache.org%3E
Best,
Sergio
On Wed, Feb 12, 2020 at 11:42 AM Durity, Sean R <
sean_r_dur...@homedepot.com> wrote:
> >>A while ago, on my first cluster
>
>
>
> Understateme
Thanks for your reply!
So unless the sstable format has changed, I can avoid doing that.
Correct?
Best,
Sergio
On Wed, Feb 12, 2020, 10:58 AM Durity, Sean R
wrote:
> Check the readme.txt for any upgrade notes, but the basic procedure is to:
>
>- Verify that
I define the
contact points, can I specify any node in the cluster as a contact point and
not necessarily a seed node?
Best,
Sergio
On Wed, Feb 12, 2020, 9:08 AM Arvinder Dhillon
wrote:
> I believe seed nodes are not special nodes, it's just that you choose a
> few nodes from cluster that
Hi guys!
Is there a way to promote a non-seed node to a seed node?
If yes, how do you do it?
Thanks!
Hi guys!
How do you usually upgrade your cluster for minor version upgrades?
I tried to add a node with 3.11.5 version to a test cluster with 3.11.4
nodes.
Is there any restriction?
Best,
Sergio
Do you have any chance to take a look about this one?
On Mon, Feb 3, 2020 at 11:36 PM Sergio
wrote:
> After reading this
>
> *I would only consider moving a cluster to 4 tokens if it is larger than
> 100 nodes. If you read through the paper that Erick mentioned, writ
Another option is the DSE bulk loader, but it will require converting to
CSV/JSON (a good option if you don't want to play with sstableloader and
deal with getting all the sstables from all the nodes).
https://docs.datastax.com/en/dsbulk/doc/index.html
Cheers
Sergio
On Wed, Feb 5, 2020 at 16
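A minimal sketch of the CSV conversion step using only the standard library (the column names are illustrative; dsbulk's exact expectations, such as the header row, should be checked against its docs):

```python
import csv
import io

# Rows pulled from the source cluster, ready to be written out for dsbulk load.
rows = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()     # header row mapping CSV columns to table columns
writer.writerows(rows)

out = buf.getvalue().splitlines()
assert out[0] == "id,name"
assert out[1] == "1,alice"
```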
://docs.datastax.com/en/cassandra-oss/3.x/cassandra/tools/toolsStatus.html
On Mon, Feb 3, 2020 at 11:43 PM Sergio
wrote:
> Thanks, Erick!
>
> I thought that the snapshot size was not counted in the load.
>
> On Mon, Feb 3, 2020 at 11:24 PM Erick Ramirez <
> fli
Thanks, Erick!
I thought that the snapshot size was not counted in the load.
On Mon, Feb 3, 2020 at 11:24 PM Erick Ramirez
wrote:
> Why do df -h and du -sh show a big discrepancy? Is nodetool load
>> computed from df -h?
>>
>
> In Linux terms, df reports the filesystem disk
separating the Datacenter for reads from the one that
handles the writes...
Thanks for your help!
Sergio
On Sun, Feb 2, 2020 at 6:36 PM Anthony Grasso <
anthony.gra...@gmail.com> wrote:
> Hi Sergio,
>
> There is a misunderstanding here. My post makes no recom
Hello!
I was trying to understand the below differences:
Cassandra 3.11.4
i3xlarge aws nodes
$ du -sh /mnt
123G    /mnt
$ nodetool info
ID : 3647fcca-688a-4851-ab15-df36819910f4
Gossip active : true
Thrift active : true
Native Transport active: true
Load
Thanks Erick!
Best,
Sergio
On Sun, Feb 2, 2020, 10:07 PM Erick Ramirez wrote:
> If you are after more details about the trade-offs between different sized
>> token values, please see the discussion on the dev mailing list: "[Discuss]
>> num_tokens default in Cassandra 4.0
Thanks Anthony!
I will read more about it
Best,
Sergio
On Sun, Feb 2, 2020 at 6:36 PM Anthony Grasso <
anthony.gra...@gmail.com> wrote:
> Hi Sergio,
>
> There is a misunderstanding here. My post makes no recommendation for the
> value of num_tokens. Rather,
https://thelastpickle.com/blog/2019/02/21/set-up-a-cluster-with-even-token-distribution.html
This
is the article with the 4-token recommendation.
@Erick Ramirez: which is the dev thread for the default 32-token
recommendation?
Thanks,
Sergio
On Fri, Jan 31, 2020 at 2:49 PM Erick Ramirez
ame as sstableloader.
>
>
> Regards,
>
> Nitan
>
> Cell: 510 449 9629
>
> On Jan 24, 2020, at 10:40 AM, Sergio wrote:
>
>
> I was wondering if that improvement for token allocation would work even
> with just one rack. It should but I am not sure.
>
> Does Dsb
I was wondering if that improvement for token allocation would work even
with just one rack. It should but I am not sure.
Does DSBulk support cluster-to-cluster migration without CSV or JSON export?
Thanks and Regards
On Fri, Jan 24, 2020, 8:34 AM Nitan Kainth wrote:
> Instead of
Thanks for the explanation. It deserves a blog post.
Sergio
On Wed, Jan 22, 2020, 1:22 PM Reid Pinchback
wrote:
> The reaper logs will say if nodes are being skipped. The web UI isn’t
> that good at making it apparent. You can sometimes tell it is likely
> happening when you
Thank you very much for your extended response.
Should I look for some particular message in the logs to detect such behavior?
How do you tune it?
Thanks,
Sergio
On Wed, Jan 22, 2020, 12:59 PM Reid Pinchback
wrote:
> Kinda. It isn’t that you have to repair twice per se, j
with i3xlarge nodes.
Thanks,
Sergio
On Wed, Jan 22, 2020 at 8:28 AM Sergio
wrote:
> Thank you very much! Yes I am using reaper!
>
> Best,
>
> Sergio
>
> On Wed, Jan 22, 2020, 8:00 AM Reid Pinchback
> wrote:
>
>> Sergio, if you’re looking for a new f
Thank you very much! Yes I am using reaper!
Best,
Sergio
On Wed, Jan 22, 2020, 8:00 AM Reid Pinchback
wrote:
> Sergio, if you’re looking for a new frequency for your repairs because of
> the change, if you are using reaper, then I’d go for repair_freq <=
> gc_grace / 2.
Thank you very much for your response.
The considerations mentioned are the ones that I was expecting.
I believe that I am good to go.
I just wanted to make sure that there was no need to run any other extra
command beside that one.
Best,
Sergio
On Tue, Jan 21, 2020, 3:55 PM Jeff Jirsa wrote
https://stackoverflow.com/a/22030790
For CQLSH
ALTER TABLE <keyspace>.<table> WITH gc_grace_seconds = <value>;
On Tue, Jan 21, 2020 at 1:12 PM Sergio
wrote:
> Hi guys!
>
> I just wanted to confirm with you before doing such an operation. I expect
> to increase the space but nothing more
Hi guys!
I just wanted to confirm with you before doing such an operation. I expect
to increase the space but nothing more than this. I need to perform just :
UPDATE COLUMN FAMILY cf with GC_GRACE = 691,200; //8 days
Is it correct?
Thanks,
Sergio
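For reference, the 8-day figure works out as follows (the keyspace/table names in the CQL string are placeholders, not from the thread):

```python
# gc_grace_seconds is expressed in seconds; 8 days:
gc_grace = 8 * 24 * 60 * 60
assert gc_grace == 691_200

# Illustrative CQL for the change (placeholder keyspace/table names):
cql = f"ALTER TABLE my_ks.my_table WITH gc_grace_seconds = {gc_grace};"
assert "691200" in cql
```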
The BulkLoader finds application when you want a slice of the
data and not the entire cake. Is that correct?
Thanks,
Sergio
?
The latency computed by the cassandra-stress tool should almost match the
latency shown by the JMX metrics, or not?
Which one do you monitor: ClientRequest metrics, Table metrics, or
ColumnFamily metrics?
I am going to create my Grafana dashboard and explain how I configured it.
Best,
Sergio
writing large partitions
> during compaction.
>
>
>
>
>
> On Thu, Nov 21, 2019 at 6:33 PM Sergio Bilello
> wrote:
>
> > Hi guys!
> > Just for curiosity do you know anything beside
> > https://github.com/tolbertam/sstable-
Hi guys!
Just out of curiosity, do you know of anything besides
https://github.com/tolbertam/sstable-tools to find large partitions?
Best,
Sergio
-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands
with thousands of connections open, the load is below 3 on a 4-CPU
machine and the latency is good.
Thanks and have a great weekend
Sergio
On Fri, Nov 1, 2019 at 7:56 AM Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> Hi Sergio,
>
>
>
> I’m definitely no
on your
workload.
What's your experience running Cassandra in k8s? Are you using the
Cassandra Kubernetes Operator?
How do you monitor it and how do you perform disaster recovery backup?
Best,
Sergio
On Fri, Nov 1, 2019 at 2:14 PM Ben Mills
wrote:
> Thanks Sergio - that's g
In any case, I would test any configuration with tlp-stress or
cassandra-stress.
Sergio
On Fri, Nov 1, 2019, 12:31 PM Ben Mills wrote:
> Greetings,
>
> We are planning a Cassandra upgrade from 3.7 to 3.11.5 and considering a
> change to the GC config.
>
> What is
OOO but still relevant:
Wouldn't it be possible to create an Amazon AMI that has all the OS and
JVM settings in the right place, so that from there each developer can tweak
the things that need to be adjusted?
Best,
Sergio
On Thu, Oct 31, 2019 at 12:56 PM Abdul Patel
wrote:
> Lo
ked value and I used the values recommended by DataStax.
Do you have something different?
Best,
Sergio
On Wed, Oct 30, 2019 at 1:27 PM Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> Oh nvm, didn't see the later msg about just posting what your fix was.
>
https://docs.datastax.com/en/drivers/java/2.2/com/datastax/driver/core/policies/LatencyAwarePolicy.html
I had to change the policy in the Cassandra driver. I solved this problem a
few weeks ago. I am just posting the solution for anyone who could hit the
same issue.
Best,
Sergio
On 2019/10/17
Rolling bounce = rolling repair per node? Wouldn't it be easy to schedule it
with Cassandra Reaper?
On 2019/10/29 15:35:42, Paul Carlucci wrote:
> Copy the schema from your source keyspace to your new target keyspace,
> nodetool snapshot on your source keyspace, copy the SSTable files over,
I have a column family in a keyspace with replication factor = 3.
The client reads it with LOCAL_QUORUM. Does this mean that all the reads
should kick off a read repair, or not?
Are these parameters meaningful only with LOCAL_ONE or ONE consistency, then?
I have also an application that translates some
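For reference, the LOCAL_QUORUM arithmetic behind the question above: with RF=3, a quorum is 2 replicas. A one-line sketch of the standard formula:

```python
def quorum(rf: int) -> int:
    """Replicas that must respond for (LOCAL_)QUORUM: floor(rf/2) + 1."""
    return rf // 2 + 1

assert quorum(3) == 2   # RF=3: a quorum read touches 2 of the 3 replicas
assert quorum(5) == 3
```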
It disappeared from describecluster after 1 day. It is only in gossipinfo now
and this looks to be ok :)
On 2019/10/25 04:01:03, Sergio wrote:
> Hi guys,
>
> Cassandra 3.11.4
>
> nodetool gossipinfo
> /10.1.20.49
> generation:1571694191
> heartbeat:27980
://grokbase.com/t/cassandra/user/162gwp6pz6/decommissioned-nodes-shows-up-in-nodetool-describecluster-as-unreachable-in-2-1-12-version
Is there something that I should do to fix this?
Best,
Sergio
Thanks Reid!
I agree with all the things that you said!
Best,
Sergio
On Thu, Oct 24, 2019 at 9:25 AM Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> Two different AWS AZs are in two different physical locations. Typically
> different cities. Which means that y
Are you using Cassandra reaper?
On Thu, Oct 24, 2019, 12:31 PM Ben Mills wrote:
> Greetings,
>
> Inherited a small Cassandra cluster with some repair issues and need some
> advice on recommended next steps. Apologies in advance for a long email.
>
> Issue:
>
> Intermittent repair failures on
Thanks Reid and Jon!
Yes I will stick with one rack per DC for sure and I will look at the
Vnodes problem later on.
What's the difference in terms of reliability between
A) spreading 2 Datacenters across 3 AZ
B) having 2 Datacenters in 2 separate AZ
?
Best,
Sergio
On Thu, Oct 24, 2019, 7:36
,
Sergio
On Wed, Oct 23, 2019 at 2:12 PM Jon Haddad
wrote:
> Oh, my bad. There was a flood of information there, I didn't realize you
> had switched to two DCs. It's been a long day.
>
> I'll be honest, it's really hard to read your various options as you've
> intermi
Here we have 2 DC read and write
One Rack per DC
One Availability Zone per DC
Thanks,
Sergio
On Wed, Oct 23, 2019, 1:11 PM Jon Haddad wrote:
> Personally, I wouldn't ever do this. I recommend separate DCs if you want
> to keep workloads separate.
>
> On Wed, Oct 23, 2019 at 4:
TWO us-east-1b 5 write TWO us-east-1b
9. 6 write TWO us-east-1b
Thanks,
Sergio
On Wed, Oct 23, 2019 at 12:33 PM Sergio
wrote:
> Hi Reid,
>
> Thank you very much for clearing these concepts for me.
> https://community.datastax.com/comments/1133/view.h
for the replies.
Best,
Sergio
On Wed, Oct 23, 2019 at 10:57 AM Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> No, that’s not correct. The point of racks is to help you distribute the
> replicas, not further-replicate the replicas. Data centers are what do the
Please correct me if I am wrong.
Best,
Sergio
On Wed, Oct 23, 2019 at 9:21 AM Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> Datacenters and racks are different concepts. While they don't have to be
> associated with their historical meanings, the historical me
?
I am thinking of splitting my cluster into one datacenter for reads and one
for writes, and keeping all the nodes in the same rack so I can scale up one
node at a time.
Please correct me if I am wrong
Thanks,
Sergio
another thread opened where I am trying to figure out Kernel
Settings for TCP
https://lists.apache.org/thread.html/7708c22a1d95882598cbcc29bc34fa54c01fcb33c40bb616dcd3956d@%3Cuser.cassandra.apache.org%3E
Do you have anything to add to that?
Thanks,
Sergio
On Mon, Oct 21, 2019 at 15
Thanks Elliott!
How do you know if there is too much RAM used for those settings?
Which metrics do you keep track of?
What would you recommend instead?
Best,
Sergio
On Mon, Oct 21, 2019, 1:41 PM Elliott Sims wrote:
> Based on my experiences, if you have a new enough kernel I'd stron
Hello!
This is the kernel that I am using
Linux 4.16.13-1.el7.elrepo.x86_64 #1 SMP Wed May 30 14:31:51 EDT 2018
x86_64 x86_64 x86_64 GNU/Linux
Best,
Sergio
On Mon, Oct 21, 2019 at 7:30 AM Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> I don't know whi
with it and
perform a test?
Best,
Sergio
On Mon, Oct 21, 2019 at 9:27 AM Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> Since the instance size is < 32gb, hopefully swap isn’t being used, so it
> should be moot.
>
>
>
> Sergio, also be aware that -XX:+CMSCla
/usr/share/cassandra/apache-cassandra-3.11.3.jar:/usr/share/cassandra/apache-cassandra-thrift-3.11.3.jar:/usr/share/cassandra/stress.jar:
org.apache.cassandra.service.CassandraDaemon
Best,
Sergio
On Sat, Oct 19, 2019 at 2:30 PM Chris Lohfink
wrote:
> "It depends" on you
Use Cassandra reaper
On Fri, Oct 18, 2019, 10:12 PM Krish Donald wrote:
> Thanks Manish,
>
> What is the best and fastest way to repair a table using nodetool repair ?
> We are using 256 vnodes .
>
>
> On Fri, Oct 18, 2019 at 10:05 PM manish khandelwal <
> manishkhandelwa...@gmail.com> wrote:
>
Hello!
Is it still better to use ParNew + CMS than G1GC these days?
Any recommendation for i3.xlarge nodes read-heavy workload?
Thanks,
Sergio
OPTIMIZE SSD
https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/config/configRecommendedSettings.html#OptimizeSSDs
https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/config/configRecommendedSettings.html
We ar
10880364 used, 20260904 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 19341960 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20712 cassand+ 20 0 194.1g 14.4g 4.6g S 392.0 48.2 74:50.48 java
20823 sergio.+ 20 0 124856 6304 3136 S 1.7 0.0 0:13.51 htop
7865 root 20 0 1062
Problem:
The cassandra node does not work even after restart throwing this exception:
WARN [Thread-83069] 2019-10-11 16:13:23,713 CustomTThreadPoolServer.java:125 -
Transport error occurred during acceptance of message.
org.apache.thrift.transport.TTransportException: java.net.SocketException: