On Thu, Oct 8, 2020 at 12:21 AM Manu Chadha wrote:
> Hi
>
>
>
> I have created a Cassandra cluster on Kubernetes using cass-operator on
> gcp. It is for my personal experimentation. To avoid incurring cost, I
> want to stop the cluster when I am not using it and start it when I need it
> without
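A hedged sketch of one way to do that with cass-operator: the CassandraDatacenter spec has a stopped flag that scales the pods to zero while keeping the persistent volumes (the namespace and datacenter names below are assumptions):

kubectl -n cass-operator patch cassandradatacenter dc1 \
  --type merge -p '{"spec": {"stopped": true}}'    # park the cluster, keep the data volumes
kubectl -n cass-operator patch cassandradatacenter dc1 \
  --type merge -p '{"spec": {"stopped": false}}'   # bring it back when needed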
On Sat, Sep 5, 2020 at 5:55 AM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
> Hi
>
> We have been forced into rolling back our Cassandra after 1 node upgrade.
> The node was upgraded 10 days ago. We have the backup of the old data.
>
> Strategy one which we are thinking :
> 1.
On Wed, Sep 2, 2020 at 7:54 PM Tobias Eriksson
wrote:
> Hi
>
> I am monitoring a 20+ node cluster, and 5 of them have an excessive GC
> count
>
> Typically the nodes have 40-60,000 GC runs, but a handful has 4,000,000 GC
> runs
>
> And it is not temporary; this is the same all the time, see
On Fri, Aug 21, 2020 at 9:43 AM Tobias Eriksson
wrote:
> Isn’t it so that explicitly setting a column to NULL also results in a
> tombstone
>
True, thanks for pointing that out!
Then, as mentioned, the use of list, set, and map columns can also result in tombstones
>
> See
>
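For reference, a minimal cqlsh sketch of both cases (keyspace and table are made up for illustration):

cqlsh -e "
  CREATE KEYSPACE IF NOT EXISTS demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
  CREATE TABLE IF NOT EXISTS demo.events (id int PRIMARY KEY, note text, tags set<text>);
  INSERT INTO demo.events (id, note) VALUES (1, null);    -- writing an explicit NULL creates a cell tombstone for note
  UPDATE demo.events SET tags = {'a', 'b'} WHERE id = 1;  -- overwriting the whole collection writes a range tombstone first
"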
On Fri, Aug 21, 2020 at 7:57 AM Attila Wind wrote:
> Hi Cassandra Gurus,
>
> Recently I captured a very interesting warning in the logs saying
>
> 2020-08-19 08:08:32.492
> [cassandra-client-keytiles_data_webhits-nio-worker-2] WARN
> com.datastax.driver.core.RequestHandler - Query '[3 bound
On Tue, Jul 21, 2020 at 12:06 AM Mick Semb Wever wrote:
> The Cassandra team is pleased to announce the release of Apache
> Cassandra version 4.0-beta1.
>
> Apache Cassandra is a fully distributed database. It is the right
> choice when you need scalability and high availability without
>
On Thu, Jul 9, 2020 at 5:54 PM Gil Ganz wrote:
> That sounds very interesting Alex, so just to be sure I understand, it was
> like this
> 1 - you had 2.1 cluster running with the 2.1 version jar
> 2 - you upgraded to 3.0, starting the cluster with the 3.0 version jar
> that has the same strategy
On Thu, Jul 9, 2020 at 3:38 PM Gil Ganz wrote:
> Hey
> I have a 2.11.4 cluster with tables that are defined with twcs, using
> jeff's jar
>
> https://github.com/jeffjirsa/twcs
>
> All working great, but now I want to upgrade to 3.11, and I have a
> problem: Cassandra won't start, it fails on the
On Thu, Jul 2, 2020 at 9:29 AM Manu Chadha wrote:
> Thanks Alex. Will give this a try. So I just change the yaml file and
> hot-patch it or would I need to stop the cluster, delete it and make a new
> one?
>
I've no experience with this specific operator, but I expect that editing
the file and
On Thu, Jul 2, 2020 at 9:21 AM Manu Chadha wrote:
> Hi
>
>
>
> I have created a cluster using cass-operator -
> https://github.com/datastax/cass-operator. I suppose by default 3
> replicas (3 node cluster) are created.
>
>
>
> I want to scale down the cluster from 3 nodes to 1. I executed
>
>
>
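I've not tried a scale-down myself either, but with this operator the usual route would be to change the size field of the CassandraDatacenter and let it reconcile; whether it actually decommissions the extra nodes cleanly is worth verifying on a test cluster first (names are examples):

kubectl -n cass-operator patch cassandradatacenter dc1 --type merge -p '{"spec": {"size": 1}}'
kubectl -n cass-operator get pods -w   # watch the operator reconcile the underlying statefulset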
On Fri, May 22, 2020 at 9:51 PM Jeff Jirsa wrote:
> With those consistency levels it’s already possible you don’t see your
> writes, so you’re already probably seeing some of what would happen if you
> went to RF=5 like that - just less common
>
> If you did what you describe you’d have a 40%
On Mon, Mar 23, 2020 at 5:49 AM Shishir Kumar
wrote:
> Hi,
>
> Is it possible to get/predict how much time it will take for nodetool
> repair -pr to complete on a node? Currently in one of my envs (~800GB data per
> node in a 6-node cluster), it has been running for the last 3 days.
>
Cassandra Reaper used to
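There is no built-in ETA for a repair, but a rough sense of progress can be had on the node itself; a sketch (output details vary by version):

nodetool compactionstats   # validation compactions triggered by the repair show up here
nodetool netstats          # streaming sessions for ranges found to be out of sync
nodetool tpstats           # pending/active tasks on AntiEntropyStage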
On Sat, Feb 29, 2020 at 8:49 AM Jeff Jirsa wrote:
> If you’re really really advanced you MIGHT be able to use spark +
> cqlsstablewriter to create a ton of sstables with just tombstones on them
> representing deletes, then either nodetool refresh or sstableloader them
> into the cluster
>
> If
On Fri, 28 Feb 2020, 23:02 Erick Ramirez,
wrote:
> I'm not personally aware of anyone who is using it successfully other
> than ProtectWise where it was a good fit for their narrow use case. My
> limited knowledge of it is that it has some sharp edges which is the reason
> they haven't pushed
Hi,
We have a task where we would need to remove roughly 25% of the data from
the SSTables that shouldn't be there anymore.
The rows to be removed can be identified by the first component of their
partition key, e.g. remove all rows where type='A' or type='B'. The 2nd
component can be anything
On Fri, Feb 21, 2020 at 2:12 PM Deepak Sharma
wrote:
>
> We have a use case where we need to have two separate PreparedStatement
> objects (one with RetryPolicy and the other without any retry policy) for
> the same query string. And when we try to create two separate
> PreparedStatements, we
On Thu, Jan 16, 2020 at 3:18 PM Laxmikant Upadhyay
wrote:
>
> You are right, that will solve the problem. But unfortunately I won't be
> able to meet my SLA with write EACH_QUORUM. I am using LOCAL_QUORUM for
> both read and write.
> Any other way?
>
Is your read SLO more sensitive than write
On Thu, Jan 16, 2020 at 1:04 PM Laxmikant Upadhyay
wrote:
> Hi,
> What I meant by the active/standby model is that even though data is being
> replicated (asynchronously) to the standby DC, the client will only access the
> data from the active DC (let's say using LOCAL_QUORUM).
>
> you have "to switch" your
On Tue, Oct 29, 2019 at 9:34 PM Rahul Reddy
wrote:
>
> We have our infrastructure on AWS and we use EBS storage. And AWS was
> retiring one of the nodes. Since our storage was persistent we did nodetool
> drain and stopped and started the instance. This caused 500 errors in the
> service. We have
On Tue, Oct 22, 2019 at 12:47 PM Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:
>
>
> using 2.1.8, 3 nodes (m4.10xlarge, EBS SSD-based), vnodes=256, RF=3, we
> are trying to add a 4th node.
>
>
>
> The two options to my knowledge, mainly affecting throughput, namely
> stream
On Fri, Sep 27, 2019 at 7:39 PM Carl Mueller
wrote:
> So IF that delegate class would work:
>
> 1) create jar with the delegate class
> 2) deploy jar along with upgrade on node
> 3) once all nodes are upgraded, issue ALTER to change to the
> org.apache.cassandra TWCS class.
>
Yes, this used to
On Sun, Sep 29, 2019 at 9:42 AM DuyHai Doan wrote:
> Thanks Jeff for sharing the ideas. I have some question though:
>
> - CQLSSTableWriter and explicitly break between windows --> Even if
> you break between windows, if we have data worth 1 year it would
> require us to use
On Fri, Sep 6, 2019 at 11:00 AM wrote:
> - reads => as much as possible - huge stream of requests
> - data => 186GB on each node
> - the reads are unpredictable
> - there's (in the cluster) about 6 billion records
>
I wonder though, if it makes sense to use Cassandra for a read-only
On Thu, Sep 5, 2019 at 11:19 AM Federico Razzoli <
federico.razzoli@gmail.com> wrote:
>
> Are you using DateTieredCompactionStrategy? It optimises the deletion of
> expired data from disks.
> If minor compactions are not solving the problem, I suggest running
> nodetool compact.
>
Sorry, but
On Wed, Aug 21, 2019 at 1:47 PM Jeff Jirsa wrote:
> Wild ass guess based on what worked for my previous employer (for whom I
> wrote twcs), but note that if you model it such that partitions are
> contained to a single window/sstable that number is probably irrelevant and
> may be able to be
Hi,
In the TWCS readme[1] it says:
You should target fewer than 50 buckets per table based on your TTL.
What is this recommendation based upon? The docs from Datastax[2], for
example, do not mention anything like that.
Is the recommendation still applicable today for version 3.0.$latest?
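As far as I understand it, the target comes from dividing the TTL by the window size, since a window's SSTable can only be dropped whole once everything in it has expired; a rough worked example:

# expected buckets ~= TTL / compaction_window_size
#   TTL = 90 days,  window = 2 days -> ~45 buckets (within the <50 target)
#   TTL = 365 days, window = 1 day  -> ~365 buckets (too many; widen the window)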
On Tue, Aug 13, 2019 at 6:14 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
>
> I was wondering about this again, as I've noticed one of the nodes in our
> cluster accumulating ten times the number of files compared to the average
> across the rest of the cluster.
On Thu, Mar 14, 2019 at 9:55 PM Jonathan Haddad wrote:
> My coworker Alex (from The Last Pickle) wrote an in depth blog post on
> TWCS. We recommend not running repair on tables that use TWCS.
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
>
Hi,
I was wondering about this again,
On Mon, Aug 5, 2019 at 8:50 AM nokia ceph wrote:
> Hi Community,
>
> I am using Cassandra 3.0.13, a 5-node cluster with simple topology. Following
> are the timeout parameters in the yaml file:
>
> # grep timeout /etc/cassandra/conf/cassandra.yaml
> cas_contention_timeout_in_ms: 1000
>
>
> > 3. what is the best strategy under this scenario?
>
> Go to RF=3 or read and write at quorum so you’re doing 3/4 instead of 2/2
Jeff, did you mean "2/3 vs. 2/2"?
-Alex
On Wed, Jul 31, 2019 at 7:10 AM Martin Xue wrote:
> Hello,
>
> Good day. This is Martin.
>
> Can someone help me with the following query regarding Cassandra repair
> and compaction?
>
Martin,
This blog post from The Last Pickle provides an in-depth explanation as
well as some practical
On Tue, Jul 30, 2019 at 12:34 PM Vlad wrote:
> Restarting Cassandra helped.
>
But for how long?..
--
Alex
On Tue, Jul 30, 2019 at 12:11 PM Rhys Campbell
wrote:
> Are you sure it says to use assassinate as the first resort? Definitely
> not the case
>
It does. I think the reason is that it says you should run a full
repair first, and before that, stop writing to the DC being
decommissioned.
On Mon, Jul 29, 2019 at 1:21 PM Rahul Reddy
wrote:
>
> Decommissioned 2 nodes from the cluster; nodetool status doesn't list the
> nodes, as expected, but JMX metrics still show those 2 nodes as down.
> Nodetool gossip shows the 2 nodes in LEFT state. Why does my JMX still
> show those nodes as down
On Tue, Jul 16, 2019 at 5:54 PM Carl Mueller
wrote:
> stays consistently in the 40-60 range, but only recent tables are being
> compacted.
>
I would be alarmed at this point. It definitely feels like not aggressive
enough compaction: can you relax the throttling or afford to have more
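For reference, the throttling can be relaxed at runtime; a sketch (the values are just examples, and setconcurrentcompactors only exists in newer 3.x releases):

nodetool getcompactionthroughput
nodetool setcompactionthroughput 0   # 0 = unthrottled; otherwise pick a rate your disks can sustain
nodetool setconcurrentcompactors 8   # newer versions only; older ones need concurrent_compactors in cassandra.yaml and a restart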
On Mon, Jul 15, 2019 at 6:20 PM Carl Mueller
wrote:
> Related to our overstreaming, we have a cluster of about 25 nodes, with
> most at about 1000 sstable files (Data + others).
>
> And about four that are at 20,000 - 30,000 sstable files (Data+Index+etc).
>
> We have vertically scaled the
On Thu, Jul 11, 2019 at 5:04 PM Voytek Jarnot
wrote:
> My google-fu is failing me this morning. I'm looking for any tips on
> splitting a 2 DC cluster into two separate clusters. I see a lot of docs
> about decomissioning a datacenter, but not much in the way of disconnecting
> datacenters into
On Sat, Jun 29, 2019 at 6:19 AM Nimbus Lin wrote:
>
> On the 2nd question, would you like to tell me how to change a
> write's and a read's consistency level separately in cqlsh?
>
Not that I know of special syntax for that, but you may add an explicit
"CONSISTENCY " command before every
On Sat, Jun 29, 2019 at 5:49 AM Jeff Jirsa wrote:
> If you’re at RF= 3 and read/write at quorum, you’ll have full visibility
> of all data if you switch to RF=4 and continue reading at quorum because
> quorum of 4 is 3, so you’re guaranteed to overlap with at least one of the
> two nodes that
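A worked version of that arithmetic, for anyone following along:

# quorum(RF) = floor(RF/2) + 1, so RF=3 -> 2, RF=4 -> 3, RF=5 -> 3
# 3 -> 4: old quorum writes sit on 2 of the original 3 replicas; a quorum read now touches 3 of 4,
#         and 3 + 2 = 5 > 4, so the read set must include at least one written replica
# 3 -> 5 directly: old writes still sit on only 2 nodes; a quorum read touches 3 of 5,
#         and 3 + 2 = 5 = RF, so the read set can miss both written replicas, hence the repair between steps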
On Fri, Jun 28, 2019 at 11:29 PM Jeff Jirsa wrote:
> you often have to run repair after each increment - going from 3 -> 5
> means 3 -> 4, repair, 4 -> 5 - just going 3 -> 5 will violate consistency
> guarantees, and is technically unsafe.
>
Jeff,
How is going from 3 -> 4 *not violating*
On Fri, Jun 28, 2019 at 3:57 PM Marc Richter wrote:
>
> How is this dealt with in Cassandra? Is setting up firewalls the only
> way to allow only some nodes to connect to the ports 7000/7001?
>
Hi,
You can set
server_encryption_options:
internode_encryption: all
...
and distribute
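A slightly fuller sketch of that cassandra.yaml section (the keystore paths and passwords are placeholders):

server_encryption_options:
    internode_encryption: all
    keystore: /etc/cassandra/conf/server-keystore.jks
    keystore_password: <keystore-password>
    truststore: /etc/cassandra/conf/server-truststore.jks
    truststore_password: <truststore-password>
    require_client_auth: true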
On Fri, Jun 28, 2019 at 8:37 AM Ayub M wrote:
> Hello, I have a cluster with 3 nodes - say cluster1 on AWS EC2 instances.
> The cluster is up and running, took snapshot of the keyspaces volume.
>
> Now I want to restore few tables/keyspaces from the snapshot volumes, so I
> created another
On Fri, Jun 28, 2019 at 3:14 AM Voytek Jarnot
wrote:
> Curious if anyone could shed some light on this. Trying to set up a
> 4-node, one DC (for now, same region, same AZ, same VPC, etc) cluster in
> AWS.
>
> All nodes have the following config (everything else basically standard):
>
On Fri, Jun 21, 2019 at 7:02 PM Nitan Kainth wrote:
>
> we upgraded binaries from 3.0 to 4.0.
>
Where did you get the binaries for 4.0? It is not released officially yet,
so I guess you were using SVN trunk? Or was there a pre-release?
we run major compaction periodically for some valid
On Tue, Jun 18, 2019 at 8:06 AM ANEESH KUMAR K.M wrote:
>
> I am using Cassandra cluster with 3 nodes which is hosted on AWS. Also we
> have NodeJS web Application which is on AWS ELB. Now the issue is that,
> when I add 2 or more servers (nodeJS) in AWS ELB then the delete queries
> are not
On Tue, Jun 18, 2019 at 3:08 AM Jeff Jirsa wrote:
> Yes - the incomplete sstable will be deleted during startup (in 3.0 and
> newer there’s a transaction log of each compaction in progress - that gets
> cleaned during the startup process)
>
Wait, does that mean that in pre-3.0 versions one can
On Mon, Jun 17, 2019 at 9:30 AM Anurag Sharma
wrote:
>
> We are upgrading Cassandra from 1.25 to 3.X. Just curious if there is any
> recommended open source utility for the same.
>
Hi,
The "recommended open source utility" is the Apache Cassandra itself. ;-)
Given the huge difference between
On Sat, Jun 15, 2019 at 4:31 PM Nimbus Lin wrote:
> Dear cassandra's pioneers:
> I am a 5-year newbie; only now do I have time to use
> Cassandra. But I can't check Cassandra's high availability when I stop a
> seed node or a non-seed DN, as with CGE or Greenplum.
> Would someone can
On Thu, Jun 13, 2019 at 3:41 PM Jeff Jirsa wrote:
>
> Bootstrapping a new node does not require repairs at all.
>
Was my understanding as well.
Replacing a node only requires repairs to guarantee consistency to avoid
> violating quorum because streaming for bootstrap only streams from one
>
On Thu, Jun 13, 2019 at 3:16 PM Jeff Jirsa wrote:
> On Jun 13, 2019, at 2:52 AM, Oleksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
> On Wed, Jun 12, 2019 at 4:02 PM Jeff Jirsa wrote:
>
> To avoid violating consistency guarantees, you have to repair the replicas
>
On Thu, Jun 13, 2019 at 2:09 PM Léo FERLIN SUTTON
wrote:
>> Last, but not least: are you using the default number of vnodes, 256? The
>> overhead of large number of vnodes (times the number of nodes), can be
>> quite significant. We've seen major improvements in repair runtime after
>>
On Thu, Jun 13, 2019 at 2:07 PM Léo FERLIN SUTTON
wrote:
>
> Overall we are talking about a 1.08TB table, using LCS.
>
> SSTable count: 1047
>> SSTables in each level: [15/4, 10, 103/100, 918, 0, 0, 0, 0, 0]
>
> SSTable Compression Ratio: 0.5192269874287099
>
> Number of partitions (estimate):
On Thu, Jun 13, 2019 at 11:28 AM Léo FERLIN SUTTON
wrote:
>
> ## Cassandra configuration :
> 4 concurrent_compactors
> Current compaction throughput: 150 MB/s
> Concurrent reads/write are both set to 128.
>
> I have also temporarily stopped every repair operations.
>
> Any ideas about how I can
On Thu, Jun 13, 2019 at 10:36 AM R. T.
wrote:
>
> Well, actually by running cfstats I can see that the totaldiskspaceused is
> about ~ 1.2 TB per node in the DC1 and ~ 1 TB per node in DC2. DC2 was off
> for a while, that's why there is a difference in space.
>
> I am using Cassandra 3.0.6 and
>
On Wed, Jun 12, 2019 at 4:02 PM Jeff Jirsa wrote:
> To avoid violating consistency guarantees, you have to repair the replicas
> while the lost node is down
>
How do you suggest to trigger it? Potentially replicas of the primary
range for the down node are all over the local DC, so I would go
On Wed, Apr 3, 2019 at 4:23 PM David Taylor wrote:
>
> $ nodetool status
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at
> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
>
Could it be that your user doesn't have
On Wed, Apr 3, 2019 at 4:37 PM Saleil Bhat (BLOOMBERG/ 731 LEX) <
sbha...@bloomberg.net> wrote:
>
> Thanks for the reply! One clarification: the replacement node WOULD be
> DC-local as far as Cassandra is concerned; it would just be in a
> different physical DC. Using the Orlando -> Tampa
On Wed, Apr 3, 2019 at 12:28 AM Saleil Bhat (BLOOMBERG/ 731 LEX) <
sbha...@bloomberg.net> wrote:
>
> The standard procedure for doing this seems to be add a 3rd datacenter to
> the cluster, stream data to the new datacenter via nodetool rebuild, then
> decommission the old datacenter. A more
Hello,
We've just noticed that we cannot install older minor releases of Apache
Cassandra from Debian packages, as described on this page:
http://cassandra.apache.org/download/
Previously we were doing the following at the last step: apt-get install
cassandra==3.0.17
Today it fails with error:
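For reference, the usual apt syntax for pinning a version uses a single '=' (whether the Apache repository still serves the older build is a separate question):

apt-cache madison cassandra              # list the versions the configured repos actually offer
sudo apt-get install cassandra=3.0.17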
On Wed, Mar 27, 2019 at 6:36 PM Carl Mueller
wrote:
>
> EIPs per the aws experts cost money,
>
From what I know they only cost you when you're not using them. This page
shows that you are also charged if you remap them too often (more than 100
times a month), which I didn't realize:
On Tue, Mar 26, 2019 at 10:28 PM Carl Mueller
wrote:
> - the AWS people say EIPs are a PITA.
>
Why?
> - if we hardcode the global IPs in the yaml, then yaml editing is required
> for the occasional hard instance reboot in AWS and its attendant global IP
> reassignment
> - if we try leaving
On Tue, Mar 26, 2019 at 5:49 PM Carl Mueller
wrote:
> Looking at the code it appears it shouldn't matter what we set the yaml
> params to. The Ec2MultiRegionSnitch should be using the aws metadata
> 169.254.169.254 to pick up the internal/external ips as needed.
>
This is somehow my expectation
On Mon, Mar 25, 2019 at 11:13 PM Carl Mueller
wrote:
>
> Since the internal IPs are given when the client app connects to the
> cluster, the client app cannot communicate with other nodes in other
> datacenters.
>
Why should it? The client should only connect to its local data center and
leave
On Tue, Mar 5, 2019 at 2:24 PM Jeff Jirsa wrote:
> Ec2 multi should work fine in one region, but consider using
> GossipingPropertyFileSnitch if there’s even a chance you’ll want something
> other than AWS regions as dc names - multicloud, hybrid, analytics DCs, etc
>
For the record, DC names
On Wed, Feb 27, 2019 at 3:11 PM Durity, Sean R
wrote:
> We use the PropertyFileSnitch precisely because it is the same on every
> node. If each node has to have a different file (for GPFS) – deployment is
> more complicated. (And for any automated configuration you would have a
> list of hosts
On Wed, Feb 27, 2019 at 4:15 AM wxn...@zjqunshuo.com
wrote:
> >After restart with the new address the server will notice it and log a
> warning, but it will keep token ownership as long as it keeps the old host
> id (meaning it must use the same data directory as before restart).
>
> Based on my
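A sketch of the simpler procedure implied above (service names and paths depend on the installation):

nodetool drain
sudo service cassandra stop
# change listen_address / broadcast_address (and rpc_address if needed) in cassandra.yaml,
# but keep the same data directories so the node keeps its host id and therefore its tokens
sudo service cassandra start
# expect a warning in the log about the changed address; token ownership is retained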
On Tue, Feb 26, 2019 at 3:26 PM Durity, Sean R
wrote:
> This has not been my experience. Changing IP address is one of the worst
> admin tasks for Cassandra. System.peers and other information on each node
> is stored by ip address. And gossip is really good at sending around the
> old
On Tue, Feb 26, 2019 at 9:39 AM wxn...@zjqunshuo.com
wrote:
>
> I'm running 2.2.8 with vnodes and I'm planning to change node IP address.
> My procedure is:
> Turn down one node, setting auto_bootstrap to false in yaml file, then
> bring it up with -Dcassandra.replace_address. Repeat the
On Thu, Feb 14, 2019 at 4:39 PM Jeff Jirsa wrote:
>
> Wait, doesn't cleanup just rewrite every SSTable one by one? Why would
compaction strategy matter? Do you mean that after cleanup STCS may pick
some resulting tables to re-compact them due to the min/max size
difference, which would not be
On Wed, Feb 13, 2019 at 6:47 PM Jeff Jirsa wrote:
> Depending on how bad data resurrection is, you should run it for any host
> that loses a range. In vnodes, that's usually all hosts.
>
> Cleanup with LCS is very cheap. Cleanup with STCS/TWCS is a bit more work.
>
Wait, doesn't cleanup just
On Wed, Feb 13, 2019 at 4:40 PM Jeff Jirsa wrote:
> Some people who add new hosts rebalance the ring afterward - that
> rebalancing can look a lot like a shrink.
>
You mean by moving the tokens? That's only possible if one is not using
vnodes, correct?
I also believe, but don’t have time to
On Wed, Feb 13, 2019 at 5:31 AM Jeff Jirsa wrote:
> The most likely result of not running cleanup is wasted disk space.
>
> The second most likely result is resurrecting deleted data if you do a
> second range movement (expansion, shrink, etc).
>
> If this is bad for you, you should run cleanup
On Tue, Feb 12, 2019 at 7:02 PM Michael Shuler
wrote:
> If you are not using the logback SocketServer and ServerSocketReceiver
> components, the CVE doesn't affect your server with logback 1.1.3.
>
So the idea is that as long as logback.xml doesn't configure any of the
above, we are fine with
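A quick way to check is to grep the logging config for those components (the path below is the usual Debian package location):

grep -nE 'SocketServer|ServerSocketReceiver' /etc/cassandra/logback.xml || echo 'not configured'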
Hi,
The latest release notes for all versions mention that logback < 1.2.0 is
subject to CVE-2017-5929 and that the logback version is not upgraded.
E.g:
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-3.0.18
Indeed, when installing 3.0.18 from
On Mon, Jan 7, 2019 at 3:37 PM Jonathan Ballet wrote:
>
> I'm working on how we could improve the upgrades of our servers and how to
> replace them completely (new instance with a new IP address).
> What I would like to do is to replace the machines holding our current
> seeds (#1 and #2 at the
On Fri, Dec 7, 2018 at 12:43 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
>
> After a fresh JVM start the memory allocation looks roughly like this:
>
>              total        used        free      shared     buffers      cached
> Mem:           14G         14G
On Mon, Dec 17, 2018 at 11:44 AM Riccardo Ferrari
wrote:
> I am having "the same" issue.
> One of my nodes seems to have some hardware struggle; out of 6 nodes (same
> instance size) this one is likely to be marked down, it is constantly
> compacting, high system load, it's just a big pain.
>
> My
On Mon, Dec 10, 2018 at 3:23 PM Riccardo Ferrari wrote:
>
> By "small" I mean that currently I have a 6x m1.xlarge instances running
> Cassandra 3.0.17. Total amount of data is around 1.5TB spread across couple
> of keypaces wih RF:3.
>
> Over time few things happened/became clear including:
>
>
On Mon, Dec 10, 2018 at 12:20 PM Riccardo Ferrari
wrote:
> I am wondering what instance type is best for a small cassandra cluster on
> AWS.
>
Define "small" :-D
> Actually I'd like to compare, or have your opinion about the following
> instances:
>
>- r5*d*.xlarge (4vCPU, *19*ecu, 32GB
On Thu, Dec 6, 2018 at 3:39 PM Riccardo Ferrari wrote:
> To be honest I've never seen the OOM in action on those instances. My Xmx
> was 8GB just like yours and that makes me think you have some process that is
> competing for memory, is it? Do you have any cron, any backup, anything
> that can
On Thu, Dec 6, 2018 at 11:14 AM Riccardo Ferrari wrote:
>
> I had a few instances in the past that were showing that unresponsiveness
> behaviour. Back then I saw with iotop/htop/dstat ... the system was stuck
> on a single thread processing (full throttle) for seconds. According to
> iotop that
On Wed, 5 Dec 2018, 19:34 Riccardo Ferrari wrote:
> Hi Alex,
>
> I saw that behaviour in the past.
>
Riccardo,
Thank you for the reply!
Do you refer to the kswapd issue only, or have you observed more problems that
match the behavior I have described?
I can tell you the kswapd0 usage is connected to the
On Wed, 5 Dec 2018, 19:53 Jonathan Haddad wrote:
> Seeing high kswapd usage means there's a lot of churn in the page cache.
> It doesn't mean you're using swap, it means the box is spending time
> clearing pages out of the page cache to make room for the stuff you're
> reading now.
>
Jon,
Thanks for
Hello,
We are running the following setup on AWS EC2:
Host system (AWS AMI): Ubuntu 14.04.4 LTS,
Linux 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5
08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Cassandra process runs inside a docker container.
Docker image is based on Ubuntu 18.04.1
On Mon, Dec 3, 2018 at 4:24 PM Oliver Herrmann
wrote:
>
> You are right. The number of nodes in our cluster is equal to the
> replication factor. For that reason I think it should be sufficient to call
> sstableloader only from one node.
>
The next question is then: do you care about
On Fri, 30 Nov 2018, 17:54 Oliver Herrmann wrote:
> When using nodetool refresh I must have write access to the data folder
> and I have to do it on every node. In our production environment the user
> that would do the restore does not have write access to the data folder.
>
OK, not entirely sure that's
On Fri, Nov 30, 2018 at 5:13 PM Oliver Herrmann
wrote:
>
> I'm always getting the message "Skipping file mc-11-big-Data.db: table
> snapshots.table3 doesn't exist". I also tried to rename the snapshots
> folder into the keyspace name (cass_testapp) but then I get the message
> "Skipping file
On Fri, Nov 23, 2018 at 5:38 PM Vitali Dyachuk wrote:
>
> We have recently met a problem when we added 60 nodes in 1 region to the
> cluster
> and set an RF=60 for the system_auth ks, following this documentation
> https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html
>
On Thu, Nov 8, 2018 at 10:42 PM Yuji Ito wrote:
>
> We are working on Jepsen testing for Cassandra.
> https://github.com/scalar-labs/jepsen/tree/cassandra/cassandra
>
> As you may know, Jepsen is a framework for distributed systems
> verification.
> It can inject network failure and so on and
On Sat, Nov 3, 2018 at 1:13 AM Brian Spindler
wrote:
> That wasn't horrible at all. After testing, provided all goes well I can
> submit this back to the main TWCS repo if you think it's worth it.
>
> Either way do you mind just reviewing briefly for obvious mistakes?
>
>
>
On Fri, Nov 2, 2018 at 5:15 PM Lou DeGenaro wrote:
> I'm looking to hear how others are coping with snapshots.
>
> According to the doc:
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsBackupDeleteSnapshot.html
>
> *When taking a snapshot, previous snapshot files are not
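For what it's worth, a minimal sketch of managing snapshots by tag (the tag and keyspace names are examples):

nodetool snapshot -t pre_change my_keyspace   # take a tagged snapshot
nodetool listsnapshots                        # see which snapshots exist and how much space they hold
nodetool clearsnapshot -t pre_change          # remove only that tag
nodetool clearsnapshot                        # or remove all snapshots on this node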
On Fri, Oct 19, 2018 at 11:04 AM Jeff Jirsa wrote:
>
> I’m mobile and can’t check but it’s this JIRA
>
> https://issues.apache.org/jira/browse/CASSANDRA-6434
>
> And it may be a table level prop, I suppose. Again, I’m not in a position
> to confirm.
>
Indeed, it's called
On Fri, Oct 19, 2018 at 10:23 AM Jeff Jirsa wrote:
> It depends on your yaml settings - in newer versions you can have
> cassandra only purge repaired tombstones (and ttl’d data is a tombstone)
>
Interesting. Which setting is that? Is it 4.0 or 3.x -- I couldn't find
anything similar in the
On Mon, Oct 1, 2018 at 12:18 PM onmstester onmstester
wrote:
>
> What if, instead of running that python and having one node with non-vnode
> config, I remove the first seed node and re-add it after the cluster was fully
> up? So the token ranges of the first seed node would also be assigned by
>
On Thu, Sep 27, 2018 at 2:24 AM Anup Shirolkar <
anup.shirol...@instaclustr.com> wrote:
>
> Most of the things look ok from your setup.
>
> You can enable Debug logs for repair duration.
> This will help identify if you are hitting a bug or other cause of unusual
> behaviour.
>
> Just a remote
On Wed, Sep 26, 2018 at 1:07 PM Anup Shirolkar <
anup.shirol...@instaclustr.com> wrote:
>
> Looking at information you have provided, the increased CPU utilisation
> could be because of repair running on the node.
> Repairs are resource intensive operations.
>
> Restarting the node should have
Hello,
On our production cluster of 30 Apache Cassandra 3.0.17 nodes we have
observed that only one node started to show about 2 times the CPU
utilization as compared to the rest (see screenshot): up to 30% vs. ~15% on
average for the other nodes.
This started more or less immediately after
On Mon, 24 Sep 2018, 13:08 Jeff Jirsa, wrote:
> The data structure used to know if data needs to be streamed (the merkle
> tree) is only granular to - at best - a token, so even with subrange repair
> if a byte is off, it’ll stream the whole partition, including parts of old
> repaired sstables
On Mon, Sep 24, 2018 at 10:50 AM Jeff Jirsa wrote:
> Do your partitions span time windows?
Yes.
--
Alex
Hello,
Our setup is as follows:
Apache Cassandra: 3.0.17
Cassandra Reaper: 1.3.0-BETA-20180830
Compaction: {
'class': 'TimeWindowCompactionStrategy',
'compaction_window_size': '30',
'compaction_window_unit': 'DAYS'
}
We have two column families which differ only in the