On Fri, Apr 20, 2018 at 4:08 AM, Aiman Parvaiz wrote:
> Hi all
>
> I have been given a 15-node C* 2.2.8 cluster to manage which has a large
> size KS (~800GB).
>
Is this per node or in total?
> Given the size of the KS most of the management tasks like repair take a
> long time to complete an
On Mon, Aug 13, 2018 at 1:31 PM kurt greaves wrote:
> No flag currently exists. Probably a good idea considering the serious
> issues with incremental repairs since forever, and the change of defaults
> since 3.0.
>
Hi Kurt,
Did you mean since 2.2 (when incremental became the default one)? Or
On Mon, Aug 13, 2018 at 3:50 PM Vitali Dyachuk wrote:
> Hello,
> I'm going to follow this documentation to add a new datacenter to the C*
> cluster
>
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
>
> The main step is to run nodetool rebuild which will sy
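For illustration, a hedged sketch of that rebuild step, run on each node of the
new datacenter (the source DC name is a placeholder):

    # pull data into the new DC from the existing one
    nodetool rebuild -- existing_dc_name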
On Thu, Aug 9, 2018 at 3:46 AM srinivasarao daruna
wrote:
> Hi All,
>
> We have built Cassandra on AWS EC2 instances. Initially, when creating the
> cluster, we did not consider multi-region deployment and we used the AWS
> EC2Snitch.
>
> We have used EBS Volumes to save our data and each of those
On Fri, May 5, 2017 at 1:20 PM Alain RODRIGUEZ wrote:
> Sorry to hear the restart did not help.
>
Hi,
We have been hitting the same issue for a few weeks now on version 3.0.16.
Normally, restarting an affected node helps, but this is something we would
like to avoid doing.
What makes it worse for us i
On Wed, Aug 29, 2018 at 3:06 AM Maxim Parkachov
wrote:
> A couple of days ago I upgraded Cassandra from 3.11.2 to 3.11.3 and I
> see that repair time is practically doubled. Does someone else experience
> the same regression ?
>
We have upgraded from 3.0.16 to 3.0.17 two days ago and we see t
On Thu, Aug 30, 2018 at 12:05 AM kurt greaves wrote:
> For 10 nodes you probably want to use between 32 and 64. Make sure you use
> the token allocation algorithm by specifying allocate_tokens_for_keyspace
>
We are using 16 tokens with 30 nodes on Cassandra 3.0. And yes, we have
used allocate_t
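For reference, a minimal cassandra.yaml sketch of that setup (the keyspace name
is a placeholder; the keyspace must already exist with the target replication
settings before the new node bootstraps):

    num_tokens: 16
    allocate_tokens_for_keyspace: my_keyspace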
On Mon, Sep 3, 2018 at 10:41 AM onmstester onmstester
wrote:
> I'm going to add 6 more nodes to my cluster (it already has 4 nodes and RF=2)
> using GossipingPropertyFileSnitch, and *NetworkTopologyStrategy and
> default num_tokens = 256.*
> It is recommended to join nodes one by one, although there is
On Mon, Sep 3, 2018 at 12:19 PM onmstester onmstester
wrote:
> What I have understood from this part of the document is that, when I already
> have nodes A, B and C in the cluster, there would be some old data on A, B, C after
> the new node D joined the cluster completely, which is data streamed to D, then
> if
On Thu, Sep 6, 2018 at 11:50 AM Alain RODRIGUEZ wrote:
>
> Be aware that this behavior happens when the compaction throughput is set
> to *0 *(unthrottled/unlimited). I believe the estimate uses the speed
> limit for calculation (which is often very much wrong anyway).
>
As far as I can remember
On Sat, 8 Sep 2018, 14:47 Jonathan Haddad wrote:
> 256 tokens is a pretty terrible default setting especially post 3.0. I
> recommend folks use 4 tokens for new clusters,
>
I wonder why not set it all the way down to 1 then? What's the key
difference once you have so few vnodes?
with s
On Sat, 8 Sep 2018, 19:00 Jeff Jirsa wrote:
> Virtual nodes accomplish two primary goals
>
> 1) it makes it easier to gradually add/remove capacity to your cluster by
> distributing the new host capacity around the ring in smaller increments
>
> 2) it increases the number of sources for streamin
Hello,
We have some tables with a significant amount of TTL'd rows that have expired
by now (and more than gc_grace_seconds have passed since the TTL). We have
stopped writing more data to these tables quite a while ago, so background
compaction isn't running. The compaction strategy is the default
Size
an 'nodetool garbagecollect' - that command is not available in
the version we are using. It only became available in 3.10.
--
Alex
>
> *From: *Oleksandr Shulgin
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Monday, September 10, 2018 at 6:53 AM
> *To: *"use
for us.
Thanks,
--
Alex
On Mon, Sep 10, 2018 at 10:29 AM Charulata Sharma (charshar)
> wrote:
>
>> Scrub takes a very long time and does not remove the tombstones. You
>> should do garbage cleaning. It immediately removes the tombstones.
>>
>>
>>
>> Thanks,
On Mon, Sep 10, 2018 at 10:03 PM Jeff Jirsa wrote:
> How much free space do you have, and how big is the table?
>
So there are 2 tables, one is around 120GB and the other is around 250GB on
every node. On the node with the most free disk space we still have around
500GB available and on the nod
On Tue, Sep 11, 2018 at 9:31 AM Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:
> As far as I remember, in newer Cassandra versions, with STCS, nodetool
> compact offers a ‘-s’ command-line option to split the output into files
> with 50%, 25% … in size, thus in this case, not a sin
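For illustration, the split-output variant mentioned above would be invoked
roughly like this (keyspace and table names are placeholders):

    nodetool compact -s my_keyspace my_table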
On Tue, Sep 11, 2018 at 9:47 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Tue, Sep 11, 2018 at 9:31 AM Steinmaurer, Thomas <
> thomas.steinmau...@dynatrace.com> wrote:
>
>> As far as I remember, in newer Cassandra versions, with STCS, nodetool
>&
On Tue, Sep 11, 2018 at 11:07 AM Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:
>
> a single (largish) SSTable or any other SSTable for a table, which does
> not get any writes (with e.g. deletes) anymore, will most likely not be
> part of an automatic minor compaction anymore, thu
On Tue, Sep 11, 2018 at 10:04 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
>
> Yet another surprising aspect of using `nodetool compact` is that it
> triggers major compaction on *all* nodes in the cluster at the same time.
> I don't see where this is
ksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
>
>> On Tue, Sep 11, 2018 at 9:47 AM Oleksandr Shulgin <
>> oleksandr.shul...@zalando.de> wrote:
>>
>>> On Tue, Sep 11, 2018 at 9:31 AM Steinmaurer, Thomas <
>>> thomas.steinmau...@dynatrac
On Tue, Sep 11, 2018 at 8:10 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Tue, 11 Sep 2018, 19:26 Jeff Jirsa wrote:
>
>> Repair or read-repair
>>
>
> Could you be more specific please?
>
> Why any data would be streamed in if there is no (
On Mon, Sep 17, 2018 at 4:04 PM Jeff Jirsa wrote:
> Again, given that the tables are not updated anymore from the application
> and we have repaired them successfully multiple times already, how can it
> be that any inconsistency would be found by read-repair or normal repair?
>
> We have seen th
On Mon, Sep 17, 2018 at 4:41 PM Jeff Jirsa wrote:
> Marcus’ idea of row lifting seems more likely, since you’re using STCS -
> it’s an optimization to “lift” expensive reads into a single sstable for
> future reads (if a read touches more than - I think - 4? sstables, we copy
> it back into the m
On Mon, Sep 17, 2018 at 4:29 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
>
> Thanks for your reply! Indeed it could be coming from single-SSTable
> compaction, this I didn't think about. By any chance looking into
> compaction_history table could be
On Tue, Sep 18, 2018 at 10:38 AM Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:
>
> any indications in Cassandra log about insufficient disk space during
> compactions?
>
Bingo! The following was logged around the time compaction was started
(and I only looked around when it was
Hello,
Our setup is as follows:
Apache Cassandra: 3.0.17
Cassandra Reaper: 1.3.0-BETA-20180830
Compaction: {
'class': 'TimeWindowCompactionStrategy',
'compaction_window_size': '30',
'compaction_window_unit': 'DAYS'
}
We have two column families which differ only in the
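As a hedged sketch, the compaction settings above correspond to a table
definition along these lines (keyspace and table names are placeholders):

    ALTER TABLE my_keyspace.my_table
      WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_size': '30',
        'compaction_window_unit': 'DAYS'
      };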
On Mon, Sep 24, 2018 at 10:50 AM Jeff Jirsa wrote:
> Do your partitions span time windows?
Yes.
--
Alex
On Mon, 24 Sep 2018, 13:08 Jeff Jirsa wrote:
> The data structure used to know if data needs to be streamed (the merkle
> tree) is only granular to - at best - a token, so even with subrange repair
> if a byte is off, it’ll stream the whole partition, including parts of old
> repaired sstables
>
Hello,
On our production cluster of 30 Apache Cassandra 3.0.17 nodes we have
observed that only one node started to show about 2 times the CPU
utilization as compared to the rest (see screenshot): up to 30% vs. ~15% on
average for the other nodes.
This started more or less immediately after repai
On Wed, Sep 26, 2018 at 1:07 PM Anup Shirolkar <
anup.shirol...@instaclustr.com> wrote:
>
> Looking at information you have provided, the increased CPU utilisation
> could be because of repair running on the node.
> Repairs are resource intensive operations.
>
> Restarting the node should have hal
On Thu, Sep 27, 2018 at 2:24 AM Anup Shirolkar <
anup.shirol...@instaclustr.com> wrote:
>
> Most of the things look ok from your setup.
>
> You can enable Debug logs for repair duration.
> This will help identify if you are hitting a bug or other cause of unusual
> behaviour.
>
> Just a remote pos
On Mon, Oct 1, 2018 at 12:18 PM onmstester onmstester
wrote:
>
> What if instead of running that python and having one node with non-vnode
> config, I remove the first seed node and re-add it after the cluster was fully
> up? So the token ranges of the first seed node would also be assigned by
> Allocat
On Fri, Oct 19, 2018 at 10:23 AM Jeff Jirsa wrote:
> It depends on your yaml settings - in newer versions you can have
> cassandra only purge repaired tombstones (and ttl’d data is a tombstone)
>
Interesting. Which setting is that? Is it 4.0 or 3.x -- I couldn't find
anything similar in the 3.
On Fri, Oct 19, 2018 at 11:04 AM Jeff Jirsa wrote:
>
> I’m mobile and can’t check but it’s this JIRA
>
> https://issues.apache.org/jira/browse/CASSANDRA-6434
>
> And it may be a table level prop, I suppose. Again, I’m not in a position
> to confirm.
>
Indeed, it's called only_purge_repaired_tomb
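For illustration, a hedged sketch of setting it as a compaction sub-property
(table name and compaction class are placeholders; it only has an effect if
sstables actually get marked as repaired, i.e. with incremental repair):

    ALTER TABLE my_keyspace.my_table
      WITH compaction = {
        'class': 'SizeTieredCompactionStrategy',
        'only_purge_repaired_tombstones': 'true'
      };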
On Fri, Nov 2, 2018 at 5:15 PM Lou DeGenaro wrote:
> I'm looking to hear how others are coping with snapshots.
>
> According to the doc:
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsBackupDeleteSnapshot.html
>
> *When taking a snapshot, previous snapshot files are not auto
On Sat, Nov 3, 2018 at 1:13 AM Brian Spindler
wrote:
> That wasn't horrible at all. After testing, provided all goes well I can
> submit this back to the main TWCS repo if you think it's worth it.
>
> Either way do you mind just reviewing briefly for obvious mistakes?
>
>
> https://github.com/bs
On Thu, Nov 8, 2018 at 10:42 PM Yuji Ito wrote:
>
> We are working on Jepsen testing for Cassandra.
> https://github.com/scalar-labs/jepsen/tree/cassandra/cassandra
>
> As you may know, Jepsen is a framework for distributed systems
> verification.
> It can inject network failure and so on and che
On Fri, Nov 23, 2018 at 5:38 PM Vitali Dyachuk wrote:
>
> We have recently met a problem when we added 60 nodes in 1 region to the
> cluster
> and set an RF=60 for the system_auth ks, following this documentation
> https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html
>
Sad
On Fri, Nov 30, 2018 at 5:13 PM Oliver Herrmann
wrote:
>
> I'm always getting the message "Skipping file mc-11-big-Data.db: table
> snapshots.table3 doesn't exist". I also tried to rename the snapshots
> folder into the keyspace name (cass_testapp) but then I get the message
> "Skipping file mc-1
On Fri, 30 Nov 2018, 17:54 Oliver Herrmann wrote:
> When using nodetool refresh I must have write access to the data folder
> and I have to do it on every node. In our production environment the user
> that would do the restore does not have write access to the data folder.
>
OK, not entirely sure that's
On Mon, Dec 3, 2018 at 4:24 PM Oliver Herrmann
wrote:
>
> You are right. The number of nodes in our cluster is equal to the
> replication factor. For that reason I think it should be sufficient to call
> sstableloader only from one node.
>
The next question is then: do you care about consistency
Hello,
We are running the following setup on AWS EC2:
Host system (AWS AMI): Ubuntu 14.04.4 LTS,
Linux 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5
08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Cassandra process runs inside a docker container.
Docker image is based on Ubuntu 18.04.1 L
On Wed, 5 Dec 2018, 19:53 Jonathan Haddad wrote:
> Seeing high kswapd usage means there's a lot of churn in the page cache.
> It doesn't mean you're using swap, it means the box is spending time
> clearing pages out of the page cache to make room for the stuff you're
> reading now.
>
Jon,
Thanks for your
On Wed, 5 Dec 2018, 19:34 Riccardo Ferrari wrote:
> Hi Alex,
>
> I saw that behaviout in the past.
>
Riccardo,
Thank you for the reply!
Do you refer to the kswapd issue only, or have you observed more problems that
match the behavior I have described?
I can tell you the kswapd0 usage is connected to the `disk_a
On Thu, Dec 6, 2018 at 11:14 AM Riccardo Ferrari wrote:
>
> I had a few instances in the past that were showing that unresponsiveness
> behaviour. Back then I saw with iotop/htop/dstat ... the system was stuck
> on a single thread processing (full throttle) for seconds. According to
> iotop that w
On Thu, Dec 6, 2018 at 3:39 PM Riccardo Ferrari wrote:
> To be honest I've never seen the OOM in action on those instances. My Xmx
> was 8GB just like yours and that let me think you have some process that is
> competing for memory, is it? Do you have any cron, any backup, anything
> that can tri
On Mon, Dec 10, 2018 at 12:20 PM Riccardo Ferrari
wrote:
> I am wondering what instance type is best for a small cassandra cluster on
> AWS.
>
Define "small" :-D
> Actually I'd like to compare, or have your opinion about the following
> instances:
>
>- r5*d*.xlarge (4vCPU, *19*ecu, 32GB ra
On Mon, Dec 10, 2018 at 3:23 PM Riccardo Ferrari wrote:
>
> By "small" I mean that currently I have 6x m1.xlarge instances running
> Cassandra 3.0.17. Total amount of data is around 1.5TB spread across a couple
> of keyspaces with RF:3.
>
> Over time few things happened/became clear including:
>
>
On Mon, Dec 17, 2018 at 11:44 AM Riccardo Ferrari
wrote:
> I am having "the same" issue.
> One of my nodes seems to have some hardware struggle; out of 6 nodes (same
> instance size) this one is likely to be marked down, it's constantly
> compacting, high system load, it's just a big pain.
>
> My
On Fri, Dec 7, 2018 at 12:43 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
>
> After a fresh JVM start the memory allocation looks roughly like this:
>
>              total       used       free     shared    buffers     cached
> Mem:           14G        14G
On Mon, Jan 7, 2019 at 3:37 PM Jonathan Ballet wrote:
>
> I'm working on how we could improve the upgrades of our servers and how to
> replace them completely (new instance with a new IP address).
> What I would like to do is to replace the machines holding our current
> seeds (#1 and #2 at the m
Hi,
The latest release notes for all versions mention that logback < 1.2.0 is
subject to CVE-2017-5929 and that the logback version is not upgraded.
E.g:
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-3.0.18
Indeed, when installing 3.0.18 from t
On Tue, Feb 12, 2019 at 7:02 PM Michael Shuler
wrote:
> If you are not using the logback SocketServer and ServerSocketReceiver
> components, the CVE doesn't affect your server with logback 1.1.3.
>
So the idea is that as long as logback.xml doesn't configure any of the
above, we are fine with th
On Wed, Feb 13, 2019 at 5:31 AM Jeff Jirsa wrote:
> The most likely result of not running cleanup is wasted disk space.
>
> The second most likely result is resurrecting deleted data if you do a
> second range movement (expansion, shrink, etc).
>
> If this is bad for you, you should run cleanup n
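For illustration (keyspace name is a placeholder; run on each node that lost
ranges after the topology change):

    nodetool cleanup my_keyspace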
On Wed, Feb 13, 2019 at 4:40 PM Jeff Jirsa wrote:
> Some people who add new hosts rebalance the ring afterward - that
> rebalancing can look a lot like a shrink.
>
You mean by moving the tokens? That's only possible if one is not using
vnodes, correct?
I also believe, but don’t have time to pr
On Wed, Feb 13, 2019 at 6:47 PM Jeff Jirsa wrote:
> Depending on how bad data resurrection is, you should run it for any host
> that loses a range. In vnodes, that's usually all hosts.
>
> Cleanup with LCS is very cheap. Cleanup with STCS/TWCS is a bit more work.
>
Wait, doesn't cleanup just rew
On Thu, Feb 14, 2019 at 4:39 PM Jeff Jirsa wrote:
>
> Wait, doesn't cleanup just rewrite every SSTable one by one? Why would
compaction strategy matter? Do you mean that after cleanup STCS may pick
some resulting tables to re-compact them due to the min/max size
difference, which would not be th
On Tue, Feb 26, 2019 at 9:39 AM wxn...@zjqunshuo.com
wrote:
>
> I'm running 2.2.8 with vnodes and I'm planning to change node IP address.
> My procedure is:
> Turn down one node, setting auto_bootstrap to false in yaml file, then
> bring it up with -Dcassandra.replace_address. Repeat the procedur
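For reference, the flag quoted above is normally passed as a JVM option, e.g.
in cassandra-env.sh (the IP address is a placeholder):

    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.1"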
On Tue, Feb 26, 2019 at 3:26 PM Durity, Sean R
wrote:
> This has not been my experience. Changing IP address is one of the worst
> admin tasks for Cassandra. system.peers and other information on each node
> is stored by IP address. And gossip is really good at sending around the
> old informati
On Wed, Feb 27, 2019 at 4:15 AM wxn...@zjqunshuo.com
wrote:
> >After restart with the new address the server will notice it and log a
> warning, but it will keep token ownership as long as it keeps the old host
> id (meaning it must use the same data directory as before restart).
>
> Based on my
On Wed, Feb 27, 2019 at 3:11 PM Durity, Sean R
wrote:
> We use the PropertyFileSnitch precisely because it is the same on every
> node. If each node has to have a different file (for GPFS) – deployment is
> more complicated. (And for any automated configuration you would have a
> list of hosts an
On Tue, Mar 5, 2019 at 2:24 PM Jeff Jirsa wrote:
> Ec2 multi should work fine in one region, but consider using
> GossipingPropertyFileSnitch if there’s even a chance you’ll want something
> other than AWS regions as dc names - multicloud, hybrid, analytics DCs, etc
>
For the record, DC names ca
On Mon, Mar 25, 2019 at 11:13 PM Carl Mueller
wrote:
>
> Since the internal IPs are given when the client app connects to the
> cluster, the client app cannot communicate with other nodes in other
> datacenters.
>
Why should it? The client should only connect to its local data center and
leave
On Tue, Mar 26, 2019 at 5:49 PM Carl Mueller
wrote:
> Looking at the code it appears it shouldn't matter what we set the yaml
> params to. The Ec2MultiRegionSnitch should be using the aws metadata
> 169.254.169.254 to pick up the internal/external ips as needed.
>
This is somehow my expectation
On Tue, Mar 26, 2019 at 10:28 PM Carl Mueller
wrote:
> - the AWS people say EIPs are a PITA.
>
Why?
> - if we hardcode the global IPs in the yaml, then yaml editing is required
> for the occasional hard instance reboot in aws and its attendant global ip
> reassignment
> - if we try leaving br
On Wed, Mar 27, 2019 at 6:36 PM Carl Mueller
wrote:
>
> EIPs per the aws experts cost money,
>
From what I know they only cost you when you're not using them. This page
shows that you are also charged if you remap them too often (more than 100
times a month), which I didn't realize:
https://aws
Hello,
We've just noticed that we cannot install older minor releases of Apache
Cassandra from Debian packages, as described on this page:
http://cassandra.apache.org/download/
Previously we were doing the following at the last step: apt-get install
cassandra==3.0.17
Today it fails with error:
E
On Wed, Apr 3, 2019 at 12:28 AM Saleil Bhat (BLOOMBERG/ 731 LEX) <
sbha...@bloomberg.net> wrote:
>
> The standard procedure for doing this seems to be add a 3rd datacenter to
> the cluster, stream data to the new datacenter via nodetool rebuild, then
> decommission the old datacenter. A more detai
On Wed, Apr 3, 2019 at 4:37 PM Saleil Bhat (BLOOMBERG/ 731 LEX) <
sbha...@bloomberg.net> wrote:
>
> Thanks for the reply! One clarification: the replacement node WOULD be
> DC-local as far as Cassandra is concerned; it would just be in a
> different physical DC. Using the Orlando -> Tampa examp
On Wed, Apr 3, 2019 at 4:23 PM David Taylor wrote:
>
> $ nodetool status
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at
> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
>
Could it be that your user doesn't have permission
On Wed, Jun 12, 2019 at 4:02 PM Jeff Jirsa wrote:
> To avoid violating consistency guarantees, you have to repair the replicas
> while the lost node is down
>
How do you suggest to trigger it? Potentially replicas of the primary
range for the down node are all over the local DC, so I would go w
On Thu, Jun 13, 2019 at 10:36 AM R. T.
wrote:
>
> Well, actually by running cfstats I can see that the totaldiskspaceused is
> about ~1.2 TB per node in DC1 and ~1 TB per node in DC2. DC2 was off
> for a while, that's why there is a difference in space.
>
> I am using Cassandra 3.0.6 and
> my
On Thu, Jun 13, 2019 at 11:28 AM Léo FERLIN SUTTON
wrote:
>
> ## Cassandra configuration :
> 4 concurrent_compactors
> Current compaction throughput: 150 MB/s
> Concurrent reads/write are both set to 128.
>
> I have also temporarily stopped all repair operations.
>
> Any ideas about how I can s
On Thu, Jun 13, 2019 at 2:07 PM Léo FERLIN SUTTON
wrote:
>
> Overall we are talking about a 1.08TB table, using LCS.
>
> SSTable count: 1047
>> SSTables in each level: [15/4, 10, 103/100, 918, 0, 0, 0, 0, 0]
>
> SSTable Compression Ratio: 0.5192269874287099
>
> Number of partitions (estimate): 7
On Thu, Jun 13, 2019 at 2:09 PM Léo FERLIN SUTTON
wrote:
> Last, but not least: are you using the default number of vnodes, 256? The
>> overhead of large number of vnodes (times the number of nodes), can be
>> quite significant. We've seen major improvements in repair runtime after
>> switching
On Thu, Jun 13, 2019 at 3:16 PM Jeff Jirsa wrote:
> On Jun 13, 2019, at 2:52 AM, Oleksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
> On Wed, Jun 12, 2019 at 4:02 PM Jeff Jirsa wrote:
>
> To avoid violating consistency guarantees, you have to repair the replicas
&
On Thu, Jun 13, 2019 at 3:41 PM Jeff Jirsa wrote:
>
> Bootstrapping a new node does not require repairs at all.
>
Was my understanding as well.
Replacing a node only requires repairs to guarantee consistency to avoid
> violating quorum because streaming for bootstrap only streams from one
> rep
On Sat, Jun 15, 2019 at 4:31 PM Nimbus Lin wrote:
> Dear cassandra's pioneers:
> I am a 5 years' newbie; it is only now that I have time to use
> Cassandra, but I can't check Cassandra's high availability when I stop a
> seed node or a non-seed DN, as in CGE or Greenplum.
> Would someone can
On Mon, Jun 17, 2019 at 9:30 AM Anurag Sharma
wrote:
>
> We are upgrading Cassandra from 1.25 to 3.X. Just curious if there is any
> recommended open source utility for the same.
>
Hi,
The "recommended open source utility" is the Apache Cassandra itself. ;-)
Given the huge difference between
On Tue, Jun 18, 2019 at 3:08 AM Jeff Jirsa wrote:
> Yes - the incomplete sstable will be deleted during startup (in 3.0 and
> newer there’s a transaction log of each compaction in progress - that gets
> cleaned during the startup process)
>
Wait, does that mean that in pre-3.0 versions one can
On Tue, Jun 18, 2019 at 8:06 AM ANEESH KUMAR K.M wrote:
>
> I am using a Cassandra cluster with 3 nodes hosted on AWS. We also
> have a NodeJS web application behind an AWS ELB. Now the issue is that
> when I add 2 or more servers (NodeJS) to the AWS ELB, the delete queries
> are not work
On Fri, Jun 21, 2019 at 7:02 PM Nitan Kainth wrote:
>
> we upgraded binaries from 3.0 to 4.0.
>
Where did you get the binaries for 4.0? It is not released officially yet,
so I guess you were using SVN trunk? Or was there a pre-release?
we run major compaction periodically for some valid reaso
On Fri, Jun 28, 2019 at 3:14 AM Voytek Jarnot
wrote:
> Curious if anyone could shed some light on this. Trying to set up a
> 4-node, one DC (for now, same region, same AZ, same VPC, etc) cluster in
> AWS.
>
> All nodes have the following config (everything else basically standard):
> cassandra.ya
On Fri, Jun 28, 2019 at 8:37 AM Ayub M wrote:
> Hello, I have a cluster with 3 nodes - say cluster1 on AWS EC2 instances.
> The cluster is up and running, took snapshot of the keyspaces volume.
>
> Now I want to restore few tables/keyspaces from the snapshot volumes, so I
> created another cluste
On Fri, Jun 28, 2019 at 3:57 PM Marc Richter wrote:
>
> How is this dealt with in Cassandra? Is setting up firewalls the only
> way to allow only some nodes to connect to the ports 7000/7001?
>
Hi,
You can set
server_encryption_options:
internode_encryption: all
...
and distribute the
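A hedged sketch of the relevant cassandra.yaml section (paths and passwords are
placeholders; the keystore and truststore need to be distributed to all nodes):

    server_encryption_options:
        internode_encryption: all
        keystore: /etc/cassandra/conf/keystore.jks
        keystore_password: changeit
        truststore: /etc/cassandra/conf/truststore.jks
        truststore_password: changeit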
On Fri, Jun 28, 2019 at 11:29 PM Jeff Jirsa wrote:
> you often have to run repair after each increment - going from 3 -> 5
> means 3 -> 4, repair, 4 -> 5 - just going 3 -> 5 will violate consistency
> guarantees, and is technically unsafe.
>
Jeff,
How is going from 3 -> 4 *not violating* consi
On Sat, Jun 29, 2019 at 5:49 AM Jeff Jirsa wrote:
> If you’re at RF= 3 and read/write at quorum, you’ll have full visibility
> of all data if you switch to RF=4 and continue reading at quorum because
> quorum of 4 is 3, so you're guaranteed to overlap with at least one of the
> two nodes that got
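Spelling out the arithmetic behind that: quorum(RF) = floor(RF/2) + 1, so
quorum(3) = 2 and quorum(4) = 3. A write done at quorum under RF=3 reached 2
replicas; a quorum read under RF=4 touches 3 of 4 replicas, and since 2 + 3 > 4
at least one replica in the read set must have seen the write.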
On Sat, Jun 29, 2019 at 6:19 AM Nimbus Lin wrote:
>
> On the 2nd question, would you like to tell me how to change a
> write's and a read's consistency level separately in cqlsh?
>
Not that I know of special syntax for that, but you may add an explicit
"CONSISTENCY " command before every c
On Thu, Jul 11, 2019 at 5:04 PM Voytek Jarnot
wrote:
> My google-fu is failing me this morning. I'm looking for any tips on
> splitting a 2 DC cluster into two separate clusters. I see a lot of docs
> about decomissioning a datacenter, but not much in the way of disconnecting
> datacenters into i
On Mon, Jul 15, 2019 at 6:20 PM Carl Mueller
wrote:
> Related to our overstreaming, we have a cluster of about 25 nodes, with
> most at about 1000 sstable files (Data + others).
>
> And about four that are at 20,000 - 30,000 sstable files (Data+Index+etc).
>
> We have vertically scaled the outlie
On Tue, Jul 16, 2019 at 5:54 PM Carl Mueller
wrote:
> stays consistently in the 40-60 range, but only recent tables are being
> compacted.
>
I would be alarmed at this point. It definitely feels like not aggressive
enough compaction: can you relax the throttling or afford to have more
concurren
On Mon, Jul 29, 2019 at 1:21 PM Rahul Reddy
wrote:
>
> Decommissioned 2 nodes from the cluster; nodetool status doesn't list the
> nodes, as expected, but JMX metrics still show those 2 nodes as down.
> Nodetool gossip shows the 2 nodes in LEFT state. Why does my JMX still
> show those nodes down ev
On Tue, Jul 30, 2019 at 12:11 PM Rhys Campbell
wrote:
> Are you sure it says to use assassinate as the first resort? Definitely
> not the case
>
It does. I think the reason is that it says that you should run a full
repair first, and before that--stop writing to the DC being
decommissioned. Th
On Tue, Jul 30, 2019 at 12:34 PM Vlad wrote:
> Restarting Cassandra helped.
>
But for how long?..
--
Alex
On Wed, Jul 31, 2019 at 7:10 AM Martin Xue wrote:
> Hello,
>
> Good day. This is Martin.
>
> Can someone help me with the following query regarding Cassandra repair
> and compaction?
>
Martin,
This blog post from The Last Pickle provides an in-depth explanation as
well as some practical advice:
>
> > 3. what is the best strategy under this scenario?
>
> Go to RF=3 or read and write at quorum so you’re doing 3/4 instead of 2/2
Jeff, did you mean "2/3 vs. 2/2"?
-Alex
On Mon, Aug 5, 2019 at 8:50 AM nokia ceph wrote:
> Hi Community,
>
> I am using Cassandra 3.0.13, 5-node cluster, simple topology. Following
> are the timeout parameters in the yaml file:
>
> # grep timeout /etc/cassandra/conf/cassandra.yaml
> cas_contention_timeout_in_ms: 1000
> counter_write_requ
On Thu, Mar 14, 2019 at 9:55 PM Jonathan Haddad wrote:
> My coworker Alex (from The Last Pickle) wrote an in depth blog post on
> TWCS. We recommend not running repair on tables that use TWCS.
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
>
Hi,
I was wondering about this again,