Comments inline 

> On Feb 18, 2018, at 9:39 PM, Kenneth Brotman <> 
> wrote:
> Cassandra feels like an unfinished program to me. The problem is not that 
> it’s open source or cutting edge.  It’s an open source cutting edge program 
> that lacks some of its basic functionality.  We are all stuck addressing 
> fundamental mechanical tasks for Cassandra because the basic code that would 
> do that part has not been contributed yet.
There are probably 2-3 reasons why:

1) Historically the PMC has tried to keep the scope of the project very narrow. 
It’s a database. We don’t ship drivers. We don’t ship developer tools. We don’t 
ship fancy UIs. We ship a database. I think the narrow vision has mostly been 
for the best, but maybe it’s time to reconsider some of the scope. 

Postgres will autovacuum to prevent wraparound (hopefully), but everyone I 
know running Postgres runs flexible-freeze from cron - sometimes it’s fine to 
let the database have its opinions and let third-party tools fill in the gaps.

2) Cassandra is, by definition, a database for large-scale problems. Most of 
the companies working on/with it tend to be big companies. Big companies often 
have pre-existing automation that handles the tasks you consider fundamental, 
so there’s probably nobody actively working on what you see as missing 
features - for many people those problems are already solved.

3) It’s not nearly as basic as you think it is. DataStax seemingly had a 
multi-person team on OpsCenter, and while it was better than anything else 
around the last time I used it (before it stopped supporting the OSS version), 
it left a lot to be desired. It would probably take 2-3 engineers a month of 
work to build even a minimal, reliable, mostly-trivial cluster-managing UI, 
and I can think of about 10 JIRAs I’d rather see that time spent on first. 

> Ease of use issues need to be given much more attention.  For an 
> administrator, the ease of use of Cassandra is very poor. 
> Furthermore, currently Cassandra is an idiot.  We have to do everything for 
> Cassandra. Contrast that with the fact that we are in the dawn of artificial 
> intelligence.

And for everything you think is obvious, there’s a 50% chance someone else has 
already solved it differently, and your obvious new solution will look to them 
like an inconvenient assumption and unwelcome complexity. Open source projects 
walk a fine line: trying to be useful without making too many assumptions, 
being “too” opinionated, or overstepping bounds. We may be too conservative, 
but it’s very easy to go too far in the opposite direction. 

> Software exists to automate tasks for humans, not mechanize humans to 
> administer tasks for a database.  I’m an engineering type.  My job is to 
> apply science and technology to solve real world problems.  And that’s where 
> I need an organization’s I.T. talent to focus; not in crank starting an 
> unfinished database.

And that’s why nobody’s done it - we all have bigger problems we’re being paid 
to solve, and nobody’s felt it necessary. Because it isn’t necessary: it’s 
nice, but not required.

> For example, I should be able to go to any node, replace the Cassandra.yaml 
> file and have a prompt on the display ask me if I want to update all the yaml 
> files across the cluster.  I shouldn’t have to manually modify yaml files on 
> each node or have to create a script for some third party automation tool to 
> do it. 
I don’t see this ever happening. Your config management already pushes files 
around your infrastructure; Cassandra doesn’t need to do it. 
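To make that concrete, here’s a rough sketch of the rolling push any config 
management tool already does for you - hostnames, paths, and command strings 
are all illustrative, not a recommendation:

```python
# Illustrative sketch of a rolling cassandra.yaml push - the kind of thing
# Ansible/Chef/Puppet already handle. Hosts and paths are made-up examples.

REMOTE_CONFIG = "/etc/cassandra/cassandra.yaml"

def rolling_push_commands(local_path, nodes):
    """Build the copy + restart commands for each node, one node at a time,
    so the cluster never loses more than one member to a restart."""
    cmds = []
    for host in nodes:
        cmds.append(f"scp {local_path} {host}:{REMOTE_CONFIG}")
        cmds.append(f"ssh {host} 'sudo systemctl restart cassandra'")
    return cmds

# Dry run: print what would be executed against a three-node cluster.
for cmd in rolling_push_commands("cassandra.yaml", ["n1", "n2", "n3"]):
    print(cmd)
```

The whole thing is a file copy and a rolling restart - which is exactly why it 
belongs in your deployment tooling, not in the database.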

> I should not have to turn off service, clear directories, restart service in 
> coordination with the other nodes.  It’s already a computer system.  It can 
> do those things on its own.

The only time you should be doing this is when you’re wiping nodes from failed 
bootstrap, and that stopped being required in 2.2.
> How about read repair.  First there is something wrong with the name.  Maybe 
> it should be called Consistency Repair.  An administrator shouldn’t have to 
> do anything.  It should be a behavior of Cassandra that is programmed in. It 
> should consider the GC setting of each node, calculate how often it has to 
> run repair, when it should run it so all the nodes aren’t trying at the same 
> time and when other circumstances indicate it should also run it.
There’s a good argument to be made that something like Reaper should be shipped 
with Cassandra. There’s another good argument that most tools like this end up 
needing some sort of leader election for scheduling and that goes against a lot 
of the fundamental assumptions in Cassandra (all nodes are equal, etc) - 
solving that problem is probably at least part of why you haven’t seen them 
built into the db. “Leader election is easy” you’ll say, and I’ll laugh and 
tell you about users I know who have DCs go offline for weeks at a time. 

> Certificate management should be automated.
Stefan (in particular) has done a fair amount of work on this, but I’d bet 90% 
of users don’t use SSL and genuinely don’t care. 

> Cluster wide management should be a big theme in any next major release.
Nah. Stability and testing should be a big theme in the next major release.

> What is a major release?  How many major releases could a program have before 
> all the coding for basic stuff like installation, configuration and 
> maintenance is included!
> Finish the basic coding of Cassandra, make it easy to use for administrators, 
> make it smart, add cluster wide management.  Keep Cassandra competitive or it 
> will soon be the old Model T we all remember fondly.

Let’s keep some perspective. Most of us came to Cassandra from RDBMS worlds 
where we were building solutions out of a bunch of master/slave MySQL / 
Postgres type databases. I started using Cassandra 0.6 when I needed to store 
something like 400GB/day in 200whatever on spinning disks, when 100GB felt 
like a “big” database, and the thought of writing runbooks and automation to 
automatically pick the most up-to-date slave as the new master, promote it, 
repoint the other slave to the new master, then reformat the old master and 
add it as a new slave - without downtime and without potentially deleting the 
company’s whole dataset - sounded awful. Cassandra solved that problem, at the 
cost of maintaining a few yaml (then xml) files. Yes, there are rough edges - 
they get slightly less rough with each new release. Can we do better? Sure: 
use your engineering time and send some patches. But the basic stuff is the 
nuts and bolts of the database: I care way more about streaming and compaction 
than I’ll ever care about installation. 

> I ask the Committee to compile a list of all such items, make a plan, and 
> commit to including the completed and tested code as part of major release 
> 5.0.  I further ask that release 4.0 not be delayed and then there be an 
> unusually short skip to version 5.0.

The committers are working their asses off on all sorts of hard problems. Some 
of those are probably even related to Cassandra. If you have an idea, open a 
JIRA. If you have time, send a patch. Or review a patch. But don’t expect a 
bunch of people to set aside work on optimizing the database to work on 
packaging and installation, because there’s no ROI in it for 99% of the 
existing committers: we’re working on the database to solve problems, and 
installation isn’t one of those problems.
