RE: Check out new features in K8ssandra and Mission Control

2024-02-28 Thread Durity, Sean R via user
The k8ssandra requirement is a major blocker.


Sean R. Durity


INTERNAL USE
From: Christopher Bradford 
Sent: Tuesday, February 27, 2024 9:49 PM
To: user@cassandra.apache.org
Cc: Christopher Bradford 
Subject: [EXTERNAL] Re: Check out new features in K8ssandra and Mission Control

Hey Jon,

* What aspects of Mission Control are dependent on using K8ssandra?

Mission Control bundles in K8ssandra for the core automation workflows 
(lifecycle management, cluster operations, Medusa & Reaper). In fact we 
include the K8ssandraSpec in the top-level MissionControlCluster resource 
verbatim.
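
As a rough illustration of that embedding (the apiVersion, field names, and 
values below are assumptions for the sketch, not the authoritative Mission 
Control schema), a MissionControlCluster wrapping a K8ssandra-style spec 
might look something like:

```yaml
# Hypothetical sketch only -- apiVersion, field names, and values are
# illustrative assumptions, not taken from the Mission Control docs.
apiVersion: missioncontrol.datastax.com/v1beta1
kind: MissionControlCluster
metadata:
  name: example-cluster
spec:
  k8ssandra:            # the K8ssandraSpec, embedded verbatim
    cassandra:
      serverVersion: "4.1.4"
      datacenters:
        - metadata:
            name: dc1
          size: 3
```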

 * Can Mission Control work without K8ssandra?

Not at this time; K8ssandra powers a significant portion of the C* side of the 
stack. Mission Control provides additional functionality (web interface, 
certificate coordination, observability stack, etc.) and applies some 
conventions to how K8ssandra objects are created / templated out, but the 
actual K8ssandra operator present in MC is the same one available via the 
Helm charts.

* Is mission control open source?

Not at this time. While the majority of the Kubernetes operators are open 
source as part of K8ssandra, there are some pieces which are closed source. I 
expect some of the components may move from closed source into K8ssandra over 
time.

* I'm not familiar with Vector - does it require an agent?

Vector [vector.dev] is a pretty neat project. We run a few of its components 
as part of the stack. There is a DaemonSet which runs on each worker to 
collect host-level metrics and scrape logs emitted by containers, a sidecar 
for collecting logs from the C* container, and an aggregator which performs 
some filtering and transformation before pushing to an object store.
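
For a flavor of what one stage of such a pipeline can look like, here is a 
minimal aggregator-style Vector config sketch. The component names and the 
bucket are made up for illustration; the `kubernetes_logs` source, `filter` 
transform, and `aws_s3` sink are standard Vector component types, but this is 
not Mission Control's actual configuration:

```toml
# Hypothetical sketch -- component names and bucket are illustrative.
[sources.cassandra_logs]
type = "kubernetes_logs"   # collect container logs from the node

[transforms.drop_debug]
type = "filter"
inputs = ["cassandra_logs"]
# Keep everything except DEBUG-level lines (VRL condition)
condition = '!contains(string!(.message), "DEBUG")'

[sinks.archive]
type = "aws_s3"
inputs = ["drop_debug"]
bucket = "example-observability-bucket"
region = "us-east-1"
encoding.codec = "json"
```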

* Is Reaper deployed separately or integrated in?

Reaper is deployed as part of the cluster creation workflow. It is spun up and 
configured to connect to the cluster automatically.

~Chris

Christopher Bradford



On Tue, Feb 27, 2024 at 6:55 PM Jon Haddad <j...@jonhaddad.com> wrote:
Hey Chris - this looks pretty interesting!  It looks like there's a lot of 
functionality in here.

* What aspects of Mission Control are dependent on using K8ssandra?
* Can Mission Control work without K8ssandra?
* Is mission control open source?
* I'm not familiar with Vector - does it require an agent?
* Is Reaper deployed separately or integrated in?

Thanks!  Looking forward to trying this out.
Jon


On Tue, Feb 27, 2024 at 7:07 AM Christopher Bradford <bradfor...@gmail.com> wrote:

Hey C* folks,


I'm excited to share that the DataStax team has just released Mission Control 
[datastax.com], a new operations platform for running Apache Cassandra and 
DataStax Enterprise. Built around the open source core of K8ssandra 
[k8ssandra.io], we've been hard at work expanding multi-region capabilities. 
If you haven't seen some of the new features coming in, here are some 
highlights:


  *   Management API support in Reaper - no more JMX credentials, YAY
  *   Additional support for TLS across the stack, including operator to node, 
Reaper to management API, etc.
  *   Updated metrics pipeline - removal of collectd from nodes, Vector for 
monitoring log files (goodbye tail -f)
  *   Deterministic node selection for cluster operations
  *   Top-level management tasks in the control plane (no more forced 
connections to data planes to trigger a restart)


On top of this Mission Control offers:


  *   A single web-interface to monitor and manage your clusters wherever 
they're deployed
  *   Automatic management of internode and operator to node certificates - 
this includes integration with third party CAs and rotation of all 
certificates, keys, and various Java stores
  *   Centralized metrics and logs aggregation, querying and storage with the 
capability to split the pipeline allowing for exporting of streams to other 
observability tools within your environment
  *   Per-node configuration (this is an edge case, but still something we 
wanted to make possible)


While building out Mission Control, K8ssandra has seen a number of releases 
with quite a few contributions from the community. From Helm chart updates to 
operator tweaks, we want to send out a huge 

Re: stress testing & lab provisioning tools

2024-02-28 Thread Alexander DEJANOVSKI
Hey Jon,

It's awesome to see that you're reviving both these projects!

I was eager to get my hands on an updated version of tlp-cluster with 
up-to-date AMIs. tlp-stress is by far the best Cassandra stress tool I've 
worked with, and I recommend that everyone test easy-cass-stress and build 
additional workload types.

Looking forward to testing these new forks.

Alex

On Tue, Feb 27, 2024, 02:00, Jon Haddad wrote:

> Hey everyone,
>
> Over the last several months I've put a lot of work into 2 projects I
> started back at The Last Pickle, for stress testing Cassandra and for
> building labs in AWS.  You may know them as tlp-stress and tlp-cluster.
>
> Since I haven't worked at TLP in almost half a decade, and am the primary
> / sole person investing time, I've rebranded them to easy-cass-stress and
> easy-cass-lab.  There's been several major improvements in both projects
> and I invite you to take a look at both of them.
>
> easy-cass-stress
>
> Many of you are familiar with tlp-stress.  easy-cass-stress is a fork /
> rebrand of the project that uses almost the same familiar interface as
> tlp-stress, but with some improvements.  easy-cass-stress is even easier to
> use, requiring less guessing at the parameters to help you figure out your
> performance profile.  Instead of providing a -c flag (for in-flight
> concurrency) you can now simply provide your max read and write latencies
> and it'll figure out the throughput it can get on its own, or use fixed
> rate scheduling like many other benchmarking tools have.  The adaptive
> scheduling is based on a Netflix Tech Blog post, but slightly modified to
> be sensitive to latency metrics instead of just errors.  You can read more
> about some of my changes here:
> https://rustyrazorblade.com/post/2023/2023-10-31-tlp-stress-adaptive-scheduler/
>
> GH repo: https://github.com/rustyrazorblade/easy-cass-stress
>
> easy-cass-lab
>
> This is a powerful tool that makes it much easier to spin up lab
> environments using any released version of Cassandra, with functionality
> coming to test custom branches and trunk.  It's a departure from the old
> tlp-cluster that installed and configured everything at runtime.  By
> creating a universal, multi-version AMI complete with all my favorite
> debugging tools, it's now possible to create a lab environment in under 2
> minutes in AWS.  The image includes easy-cass-stress making it
> straightforward to spin up clusters to test existing releases, and soon
> custom builds and trunk.  Fellow committer Jordan West has been working on
> this with me and we've made a ton of progress over the last several weeks.
>  For a demo check out my working session live stream last week where I
> fixed a few issues and discussed the potential and development path for the
> tool: https://youtu.be/dPtsBut7_MM
>
> GH repo: https://github.com/rustyrazorblade/easy-cass-lab
>
> I hope you find these tools as useful as I have.  I am aware of many
> extremely large Cassandra teams using tlp-stress with their 1K+ node
> environments, and hope the additional functionality in easy-cass-stress
> makes it easier for folks to start benchmarking C*, possibly in conjunction
> with easy-cass-lab.
>
> Looking forward to hearing your feedback,
> Jon
>
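
The latency-sensitive adaptive scheduling Jon describes can be sketched in 
miniature as an additive-increase / multiplicative-decrease loop. The class 
and parameter names below are illustrative, not easy-cass-stress's actual 
implementation:

```python
# Hypothetical sketch of latency-sensitive adaptive rate control.
# Names and constants are illustrative, not from easy-cass-stress.

class AdaptiveRateLimiter:
    """Additive-increase / multiplicative-decrease against a latency target."""

    def __init__(self, initial_rate=100.0, max_latency_ms=50.0,
                 increase_step=10.0, decrease_factor=0.8):
        self.rate = initial_rate              # current ops/sec budget
        self.max_latency_ms = max_latency_ms  # the user-supplied latency cap
        self.increase_step = increase_step
        self.decrease_factor = decrease_factor

    def record(self, observed_latency_ms):
        if observed_latency_ms > self.max_latency_ms:
            # Latency target violated: back off multiplicatively
            self.rate *= self.decrease_factor
        else:
            # Healthy: probe for more throughput additively
            self.rate += self.increase_step
        return self.rate
```

Driving the limiter from each operation's measured latency lets the tool 
converge on the highest throughput that still meets the supplied read/write 
latency caps, instead of requiring the user to guess a concurrency value.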