Thank you for writing this. The post is really helpful.
One question - my understanding is that GC tuning depends a lot on the
read/write workload and the data size. What is the right way to
simulate the production workload in a non-production environment in
the Cassandra world?
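One approach (a hedged sketch, not a recipe): describe the production
schema and query mix in a cassandra-stress profile and replay it against
the test cluster. The profile path, the read1 query name, and the node
address below are placeholders:

    cassandra-stress user profile=./prod-workload.yaml \
      "ops(insert=1,read1=3)" duration=30m \
      -rate threads=100 -node 10.0.0.10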
On Wed, Apr 11,
Hello Hassaan,
We use cassandra helm chart[0] for deploying cassandra over kubernetes in
production. We have around 200GB cas data. It works really well. You can
scale up nodes easily (I haven't tested scaling down).
I would say that if you are worried about running cassandra over k8s in
production…
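For anyone trying the same, a minimal sketch of such a deployment, assuming
the (since archived) incubator chart; the release name and values are
examples, so check your chart's values.yaml:

    helm install --name cassandra incubator/cassandra \
      --set config.cluster_size=3 \
      --set persistence.size=100Gi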
Hi Michael,
We have faced the same situation as yours in our production environment
where we suddenly got "Unknown CF Exception" for materialized views too. We
are using Lagom apps with cassandra for persistence. In our case, since
these views can be regenerated from the original events, we were able to…
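Since a materialized view can be rebuilt from its base table, the usual
recovery is to drop and recreate the view. A hedged sketch via cqlsh; the
keyspace, table, and columns are made up:

    cqlsh -e "DROP MATERIALIZED VIEW IF EXISTS myks.events_by_user;"
    cqlsh -e "CREATE MATERIALIZED VIEW myks.events_by_user AS
        SELECT user_id, event_id, payload FROM myks.events
        WHERE user_id IS NOT NULL AND event_id IS NOT NULL
        PRIMARY KEY (user_id, event_id);"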
I doubt mv will run instantly, because the copy is across two different
filesystems.
On Sun, 24 Jun 2018 at 9:26 PM, Nitan Kainth wrote:
> To be safe you could follow the below process on each node, one at a time:
> Stop Cassandra
> Move the sstables (mv will do it instantly)
> Start Cassandra
>
> If you do it on
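A sketch of that per-node procedure (paths and service manager are
assumptions). As noted above, mv is only instant within one filesystem;
across filesystems it degrades to a copy:

    sudo systemctl stop cassandra
    # instant only if both paths live on the same filesystem
    mv /var/lib/cassandra/data /mnt/newdisk/cassandra/data
    sudo systemctl start cassandra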
Just curious -
From which instance type are you migrating to the i3 type, and what are the
reasons to move to the i3 type?
Are you going to take advantage of the NVMe instance storage - if yes, how?
We are also migrating our cluster on AWS, but we are currently using
r4 instances, so I was interested…
Isn't NVMe storage instance storage, i.e. the data will be lost if the
instance restarts? How are you going to make sure that there is no data
loss in case the instance gets rebooted?
On Fri, 29 Jun 2018 at 7:00 PM, Randy Lynn wrote:
> GPFS - Rahul FTW! Thank you for your help!
>
> Yes, Pradeep
Ohh, I see now. That makes sense. Thanks a lot.
On Fri, Jun 29, 2018 at 9:17 PM, Randy Lynn wrote:
> Data is only lost if you stop the node; between restarts the storage is
> fine.
>
> On Fri, Jun 29, 2018 at 10:39 AM, Pradeep Chhetri
> wrote:
>
>> Isn't NVMe storage an
Hello,
I am currently running a 3.11.2 cluster with SimpleSnitch, hence the
datacenter is datacenter1 and the rack is rack1 for all nodes on AWS. I want
to switch to GPFS by changing the rack name to the availability-zone name
and the datacenter name to the region name.
When I try to restart individual nodes with:
    -Dcassandra.ignore_dc=true
    -Dcassandra.ignore_rack=true
Regards,
Pradeep
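For reference, the pieces involved in that switch (the values below are
examples for a us-east-1 deployment):

    # cassandra.yaml
    endpoint_snitch: GossipingPropertyFileSnitch

    # cassandra-rackdc.properties
    dc=us-east-1
    rack=us-east-1a

    # cassandra-env.sh, for the first restart after the rename only
    JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"
    JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_rack=true"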
On Thu, Aug 23, 2018 at 10:53 PM, Pradeep Chhetri
wrote:
> Hello,
>
> I am currently running a 3.11.2 cluster in SimpleSnitch hence the
> datacenter is datacenter1 and rack is rack1 for all nodes on AWS. I
…information provided by the new snitch.
>
>
> If the topology of the network has changed, but no datacenters are added:
>> a. Shut down all the nodes, then restart them.
>> b. Run a sequential repair and nodetool cleanup on each node.
>
>
> On Sun, Aug 26, 2018 at 11:14 A
>> Pradeep.
>>>
>>> Right, so from that documentation it sounds like you actually have to
>>> stop all nodes in the cluster at once and bring them back up one at a time.
>>> A rolling restart won't work here.
>>>
>>> On Sun, Aug 26,
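For the archives, the commands behind step b above, run on each node in
turn:

    nodetool repair -seq -full   # sequential, full repair
    nodetool cleanup             # drop data the node no longer owns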
You may want to try upgrading to 3.11.3 instead, which has some memory leak
fixes.
On Tue, Aug 28, 2018 at 9:59 AM, Mun Dega wrote:
> I am surprised that no one else ran into any issues with this version. GC
> can't catch up fast enough and there is constant Full GC taking place.
>
> The result
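Before and after such an upgrade, GC pressure is easy to eyeball (a sketch,
assuming the default log location):

    grep -i gcinspector /var/log/cassandra/system.log | tail -20
    nodetool gcstats   # pause totals since the last invocation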
Hello Eunsu,
I am going through the same exercise at my job. I was making notes as I was
testing the steps in my preproduction environment. Although I haven't
tested it end to end, hopefully this might help you:
https://medium.com/p/465e9bf28d99
Regards,
Pradeep
On Mon, Sep 10, 2018 at 5:59 PM,
Hello
I am running a 5-node cassandra 3.11.3 cluster on AWS with SimpleSnitch. I
was testing the process to migrate to GPFS, using the AWS region as the
datacenter name and the AWS zone as the rack name, in my preprod environment
and was able to achieve it.
But before decommissioning the older datacenter, I
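The usual order for retiring the old datacenter, as a hedged sketch (myks
and the DC names are placeholders):

    # point replication at the new DC (repeat per keyspace, incl. system_auth)
    cqlsh -e "ALTER KEYSPACE myks WITH replication =
        {'class': 'NetworkTopologyStrategy', 'us-east-1': 3};"
    # on each node of the new DC, stream the existing data across
    nodetool rebuild -- datacenter1
    # once clients talk only to the new DC, on each old node:
    nodetool decommission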
Hello everyone,
Can someone please help me validate the steps I am following to
migrate the cassandra snitch?
Regards,
Pradeep
On Wed, Sep 12, 2018 at 1:38 PM, Pradeep Chhetri
wrote:
> Hello
>
> I am running cassandra 3.11.3 5-node cluster on AWS with SimpleSnitch. I
> was
…a new node should never
> be in the list of seeds unless it's the first node of the cluster. Add
> nodes, then make them seeds.
>
>
> On Mon, 17 Sept 2018 at 11:25, Pradeep Chhetri
> wrote:
>
>> Hello everyone,
>>
>> Can someone please help me in validating the
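The seed list referred to above lives in cassandra.yaml; a joining node
should list existing nodes, never itself (addresses are examples):

    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.0.0.1,10.0.0.2"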
…username and/or password are incorrect
>
> I was using DCAwareRoundRobinPolicy, but I guess it's probably because of
> the withUsedHostsPerRemoteDc option.
>
> I took several steps and the error log disappeared. It is probably
> 'nodetool rebuild' after altering the system_auth ta…
…altering the keyspace.
>
> Do your clients have the 'withUsedHostsPerRemoteDc' option?
>
>
> On 18 Sep 2018, at 1:17 PM, Pradeep Chhetri wrote:
>
> Hello Eunsu,
>
> I am also using PasswordAuthenticator in my cassandra cluster. I didn't
> come across this…
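For the archives, the fix being discussed is roughly this (DC names and
replication factors are placeholders):

    cqlsh -e "ALTER KEYSPACE system_auth WITH replication =
        {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};"
    # then, on each node of the newly added DC:
    nodetool rebuild -- dc1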
Hello everyone,
We had some issues yesterday in our 3-node cluster where the application
tried to create the same table twice in quick succession and the cluster
became unstable. Temporarily, we reduced it to a single-node cluster, which
gave us some relief.
Now when we are trying to bootstrap a new node and add
Got the cluster to converge on the same schema by restarting just the node
that had the different schema version.
Thanks.
On Thu, Oct 12, 2017 at 2:23 PM, Pradeep Chhetri
wrote:
> Hi Carlos,
>
> Thank you for the reply.
>
> I am running 3.9 cassandra version.
>
> I am also not sure what aff
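For anyone debugging the same schema disagreement, the quick check is:

    nodetool describecluster   # a healthy cluster lists exactly one schema version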
Hi,
I am trying to restore an empty 3-node cluster with the three snapshots
taken on another 3-node cluster.
What is the best approach to achieve this without losing any data present
in the snapshots?
Thank you.
Pradeep
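One low-risk option is sstableloader, which streams a snapshot into the new
cluster regardless of its token layout (addresses and paths are examples):

    # run once per table directory extracted from the snapshot
    sstableloader -d 10.0.1.1,10.0.1.2,10.0.1.3 /backups/myks/mytable/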
> …make sure that the new node and the old node have the same tokens.
>
>
> Saludos
>
> Jean Carlo
>
> "The best way to predict the future is to invent it" Alan Kay
>
> On Mon, Oct 16, 2017 at 1:40 PM, Pradeep Chhetri
> wrote:
>
>> Hi,
>>
…a nodetool refresh, obviously checking the correct names
> of your sstables.
> You can check the tokens of your node using nodetool info -T
>
> But I think sstableloader is the easy way :)
>
>
>
>
> Saludos
>
> Jean Carlo
>
> "The best way to predict th
>
> On Oct 18, 2017 4:22 AM, "Pradeep Chhetri" wrote:
>
> Hi Anthony
>
> I did the following steps to restore. Please let me know if I missed
> something.
>
> - Took snapshots on the 3 nodes of the existing cluster simultaneously
> - Copied those snapshots…
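For completeness, the snapshot step itself (tag and keyspace are examples):

    # run on every node, as close together in time as possible
    nodetool snapshot -t pre-restore myks
    # output lands under <data_dir>/myks/<table>-<id>/snapshots/pre-restore/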
Hi,
We are taking daily snapshots to back up our cassandra data and then
use those backups to restore into a different environment. I would like to
verify that the data is consistent and that all the data present at the
time the backup was taken is actually restored.
Currently I just count the number of rows…
…to generate a file from source and destination. After
> that you can use a diff tool.
>
> On Mon, Oct 30, 2017 at 10:11 PM Pradeep Chhetri
> wrote:
>
>> Hi,
>>
>> We are taking daily snapshots for backing up our cassandra data and then
>> use our backups to restor
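A crude but workable version of that (hostnames and table names are
placeholders; COPY exports the full table, so this only suits modest sizes):

    cqlsh src-host -e "COPY myks.mytable TO 'src.csv';"
    cqlsh dst-host -e "COPY myks.mytable TO 'dst.csv';"
    sort src.csv > a.csv; sort dst.csv > b.csv; diff a.csv b.csv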
Hello all,
I am trying to add a 4th node to a 3-node cluster which is using
SimpleSnitch, but this new node has been stuck in the Joining state for the
last 20 hours. We have around 10GB of data per node with an RF of 3.
It is mostly stuck in the "redistributing index summaries" phase.
Here are the logs:
https://gist.gith
> …system.log and debug.log
>
> On 17 December 2017 at 11:19, Pradeep Chhetri
> wrote:
>
>> Hello all,
>>
>> I am trying to add a 4th node to a 3-node cluster which is using
>> SimpleSnitch. But this new node is stuck in Joining state for last 20
>> h
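On the joining node, these two commands usually show where it is stuck:

    nodetool netstats | grep -v '100%'   # pending streaming sessions
    nodetool compactionstats             # index summary redistribution shows here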
Hello everyone,
We are running a cassandra cluster inside containers over Kubernetes. We
have a requirement to restore a fresh new cluster from an existing snapshot
on a weekly basis.
Currently, while doing it manually, I need to copy the snapshot folder
inside the container and then run sstab…
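A sketch of the manual loop being described, using kubectl (the pod name,
keyspace, table, and the table directory's uuid are placeholders):

    kubectl cp ./snapshots/mytable \
        cassandra-0:/var/lib/cassandra/data/myks/mytable-<uuid>/
    kubectl exec cassandra-0 -- nodetool refresh myks mytable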
…compactions after refreshing.
>
>
> Saludos
>
> Jean Carlo
>
> "The best way to predict the future is to invent it" Alan Kay
>
> On Thu, Jan 11, 2018 at 9:58 AM, Pradeep Chhetri
> wrote:
>
>> Hello everyone,
>>
>> We are running cassandra cluste
…system node by node.
>
> So you will have the same cluster( cluster name, confs, etc)
>
>
> Saludos
>
> Jean Carlo
>
> "The best way to predict the future is to invent it" Alan Kay
>
> On Thu, Jan 11, 2018 at 10:28 AM, Pradeep Chhetri
> wrote:
>
>&
Doing so risks merging your new cluster
> and old cluster if they’re able to reach each other.
>
> --
> Jeff Jirsa
>
>
> On Jan 11, 2018, at 1:41 AM, Pradeep Chhetri
> wrote:
>
> Thank you very much Jean. Since I don't have any constraints, as you said,
> I w…
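The settings that keep a restored cluster from gossiping its way back into
the old one (values are examples):

    # cassandra.yaml on every node of the restored cluster
    cluster_name: 'myapp-restore'   # must differ from the live cluster
    # ...and keep seed_provider's seeds list pointing only at restored nodes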
Hi Anshu,
We used to have similar requirements at my workplace.
We tried multiple options, like snapshot and restore, but the one that
worked best for us was building a cassandra cluster with the same number of
nodes in preprod, doing a parallel scp of the data directly from production
to preprod, and then…
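A sketch of that parallel copy, assuming matching node counts and made-up
hostnames; each preprod node pulls from its production counterpart:

    for i in 1 2 3; do
      ssh preprod-cas-$i \
        "rsync -az prod-cas-$i:/var/lib/cassandra/data/ /var/lib/cassandra/data/" &
    done
    wait   # then nodetool refresh (or a rolling restart) on the preprod nodes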