Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-09 Thread Maxim Muzafarov
Jon,

That sounds good.  Let's make these commands rely on the settings
virtual table and keep the initial changes as minimal as possible.

We've also scheduled a Cassandra Contributor Meeting on January 30th
2024, so I'll prepare some slides with everything we've got so far and
try to prepare some drafts to demonstrate the design.
https://cwiki.apache.org/confluence/display/CASSANDRA/Cassandra+Contributor+Meeting

On Tue, 9 Jan 2024 at 00:55, Jon Haddad  wrote:
>
> It's great to see where this is going and thanks for the discussion on the ML.
>
> Personally, I think adding two new ways of accomplishing the same thing is a 
> net negative.  It means we need more documentation and creates 
> inconsistencies across tools and users.  The tradeoffs you've listed are 
> worth considering, but in my opinion adding 2 new ways to accomplish the same 
> thing hurts the project more than it helps.
>
> > - I'd like to see a symmetry between the JMX and CQL APIs, so that users 
> > will have a sense of the commands they are using and are less
> likely to check the documentation;
>
> I've worked with a couple hundred teams and I can only think of a few who use 
> JMX directly.  It's done very rarely.  After 10 years, I still have to look 
> up the JMX syntax to do anything useful, especially if there's any quoting 
> involved.  Power users might know a handful of JMX commands by heart, but I 
> suspect most have a handful of bash scripts they use instead, or have a 
> sidecar.  I also think very few users will migrate their management code from 
> JMX to CQL, nor do I imagine we'll move our own tools until the 
> `disablebinary` problem is solved.
>
> > - It will be easier for us to move the nodetool from the jmx client that is 
> > used under the hood to an implementation based on a java-driver and use the 
> > CQL for the same;
>
> I can't imagine this would make a material difference.  If someone's 
> rewriting a nodetool command, how much time will be spent replacing the JMX 
> call with a CQL one?  Looking up a virtual table isn't going to be what 
> consumes someone's time in this process.  Again, this won't be done without 
> solving `nodetool disablebinary`.
>
> > if we have cassandra-15254 merged, it will cost almost nothing to support 
> > the exec syntax for setting properties;
>
> My concern is more about the weird user experience of having two ways of 
> doing the same thing, less about the technical overhead of adding a second 
> implementation.  I propose we start simple, see if any of the reasons you've 
> listed are actually a real problem, then if they are, address the issue in a 
> follow up.
>
> If I'm wrong, it sounds like it's fairly easy to add `exec` for changing 
> configs.  If I'm right, we'll have two confusing syntaxes forever.  It's a 
> lot easier to add something later than take it away.
>
> How does that sound?
>
> Jon
>
>
>
>
> On Mon, Jan 8, 2024 at 7:55 PM Maxim Muzafarov  wrote:
>>
>> > Some operations will no doubt require a stored procedure syntax, but 
>> > perhaps it would be a good idea to split the work into two:
>>
>> These are exactly the first steps I have in mind:
>>
>> [Ready for review]
>> Allow UPDATE on settings virtual table to change running configurations
>> https://issues.apache.org/jira/browse/CASSANDRA-15254
>>
>> This issue is specifically aimed at changing the configuration
>> properties we are talking about (value is in yaml format):
>> e.g. UPDATE system_views.settings SET compaction_throughput = 128Mb/s;
>>
>> [Ready for review]
>> Expose all table metrics in virtual table
>> https://issues.apache.org/jira/browse/CASSANDRA-14572
>>
>> This is to observe the running configuration and all available metrics:
>> e.g. select * from system_views.thread_pools;
>>
>>
>> I hope both of the issues above will become part of the trunk branch
>> before we move on to the CQL management commands. In this topic, I'd
>> like to discuss the design of the CQL API, and gather feedback, so
>> that I can prepare a draft of changes to look at without any
>> surprises, and that's exactly what this discussion is about.
>>
>>
>> cqlsh> UPDATE system.settings SET compaction_throughput = 128;
>> cqlsh> exec setcompactionthroughput 128
>>
>> I don't mind removing the exec command from the CQL command API which
>> is intended to change settings. Personally, I see the second option as
>> just an alias for the first command, and in fact, they will have the
>> same implementation under the hood, so please consider the rationale
>> below:
>>
>> - I'd like to see a symmetry between the JMX and CQL APIs, so that
>> users will have a sense of the commands they are using and are less
>> likely to check the documentation;
>> - It will be easier for us to move the nodetool from the jmx client
>> that is used under the hood to an implementation based on a
>> java-driver and use the CQL for the same;
>> - if we have cassandra-15254 merged, it will cost almost nothing to
>> support the exec syntax 

Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-08 Thread Jon Haddad
It's great to see where this is going and thanks for the discussion on the
ML.

Personally, I think adding two new ways of accomplishing the same thing is
a net negative.  It means we need more documentation and creates
inconsistencies across tools and users.  The tradeoffs you've listed are
worth considering, but in my opinion adding 2 new ways to accomplish the
same thing hurts the project more than it helps.

> - I'd like to see a symmetry between the JMX and CQL APIs, so that users
will have a sense of the commands they are using and are less
likely to check the documentation;

I've worked with a couple hundred teams and I can only think of a few who
use JMX directly.  It's done very rarely.  After 10 years, I still have to
look up the JMX syntax to do anything useful, especially if there's any
quoting involved.  Power users might know a handful of JMX commands by
heart, but I suspect most have a handful of bash scripts they use instead,
or have a sidecar.  I also think very few users will migrate their
management code from JMX to CQL, nor do I imagine we'll move our own tools
until the `disablebinary` problem is solved.

> - It will be easier for us to move the nodetool from the jmx client that
is used under the hood to an implementation based on a java-driver and use
the CQL for the same;

I can't imagine this would make a material difference.  If someone's
rewriting a nodetool command, how much time will be spent replacing the JMX
call with a CQL one?  Looking up a virtual table isn't going to be what
consumes someone's time in this process.  Again, this won't be done without
solving `nodetool disablebinary`.

> if we have cassandra-15254 merged, it will cost almost nothing to support
the exec syntax for setting properties;

My concern is more about the weird user experience of having two ways of
doing the same thing, less about the technical overhead of adding a second
implementation.  I propose we start simple, see if any of the reasons
you've listed are actually a real problem, then if they are, address the
issue in a follow up.

If I'm wrong, it sounds like it's fairly easy to add `exec` for changing
configs.  If I'm right, we'll have two confusing syntaxes forever.  It's a
lot easier to add something later than take it away.

How does that sound?

Jon




On Mon, Jan 8, 2024 at 7:55 PM Maxim Muzafarov  wrote:

> > Some operations will no doubt require a stored procedure syntax, but
> perhaps it would be a good idea to split the work into two:
>
> These are exactly the first steps I have in mind:
>
> [Ready for review]
> Allow UPDATE on settings virtual table to change running configurations
> https://issues.apache.org/jira/browse/CASSANDRA-15254
>
> This issue is specifically aimed at changing the configuration
> properties we are talking about (value is in yaml format):
> e.g. UPDATE system_views.settings SET compaction_throughput = 128Mb/s;
>
> [Ready for review]
> Expose all table metrics in virtual table
> https://issues.apache.org/jira/browse/CASSANDRA-14572
>
> This is to observe the running configuration and all available metrics:
> e.g. select * from system_views.thread_pools;
>
>
> I hope both of the issues above will become part of the trunk branch
> before we move on to the CQL management commands. In this topic, I'd
> like to discuss the design of the CQL API, and gather feedback, so
> that I can prepare a draft of changes to look at without any
> surprises, and that's exactly what this discussion is about.
>
>
> cqlsh> UPDATE system.settings SET compaction_throughput = 128;
> cqlsh> exec setcompactionthroughput 128
>
> I don't mind removing the exec command from the CQL command API which
> is intended to change settings. Personally, I see the second option as
> just an alias for the first command, and in fact, they will have the
> same implementation under the hood, so please consider the rationale
> below:
>
> - I'd like to see a symmetry between the JMX and CQL APIs, so that
> users will have a sense of the commands they are using and are less
> likely to check the documentation;
> - It will be easier for us to move the nodetool from the jmx client
> that is used under the hood to an implementation based on a
> java-driver and use the CQL for the same;
> - if we have cassandra-15254 merged, it will cost almost nothing to
> support the exec syntax for setting properties;
>
> On Mon, 8 Jan 2024 at 20:13, Jon Haddad  wrote:
> >
> > Ugh, I moved some stuff around and 2 paragraphs got merged that
> shouldn't have been.
> >
> > I think there's no way we could rip out JMX, there's just too many
> benefits to having it and effectively zero benefits to removing.
> >
> > Regarding disablebinary, part of me wonders if this is a bit of a
> hammer, and what we really want is "disable binary for non-admins".  I'm
> not sure what the best path is to get there.  The local unix socket might
> be the easiest path as it allows us to disable network binary easily and
> still allow local admins, and 

Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-08 Thread Maxim Muzafarov
> Some operations will no doubt require a stored procedure syntax, but perhaps 
> it would be a good idea to split the work into two:

These are exactly the first steps I have in mind:

[Ready for review]
Allow UPDATE on settings virtual table to change running configurations
https://issues.apache.org/jira/browse/CASSANDRA-15254

This issue is specifically aimed at changing the configuration
properties we are talking about (value is in yaml format):
e.g. UPDATE system_views.settings SET compaction_throughput = 128Mb/s;

[Ready for review]
Expose all table metrics in virtual table
https://issues.apache.org/jira/browse/CASSANDRA-14572

This is to observe the running configuration and all available metrics:
e.g. select * from system_views.thread_pools;


I hope both of the issues above will become part of the trunk branch
before we move on to the CQL management commands. In this topic, I'd
like to discuss the design of the CQL API, and gather feedback, so
that I can prepare a draft of changes to look at without any
surprises, and that's exactly what this discussion is about.


cqlsh> UPDATE system.settings SET compaction_throughput = 128;
cqlsh> exec setcompactionthroughput 128

I don't mind removing the exec command from the CQL command API which
is intended to change settings. Personally, I see the second option as
just an alias for the first command, and in fact, they will have the
same implementation under the hood, so please consider the rationale
below:

- I'd like to see a symmetry between the JMX and CQL APIs, so that
users will have a sense of the commands they are using and are less
likely to check the documentation;
- It will be easier for us to move the nodetool from the jmx client
that is used under the hood to an implementation based on a
java-driver and use the CQL for the same;
- if we have cassandra-15254 merged, it will cost almost nothing to
support the exec syntax for setting properties;

On Mon, 8 Jan 2024 at 20:13, Jon Haddad  wrote:
>
> Ugh, I moved some stuff around and 2 paragraphs got merged that shouldn't 
> have been.
>
> I think there's no way we could rip out JMX, there's just too many benefits 
> to having it and effectively zero benefits to removing.
>
> Regarding disablebinary, part of me wonders if this is a bit of a hammer, and 
> what we really want is "disable binary for non-admins".  I'm not sure what 
> the best path is to get there.  The local unix socket might be the easiest 
> path as it allows us to disable network binary easily and still allow local 
> admins, and allows the OS to reject the incoming connections vs passing that 
> work onto a connection handler which would have to evaluate whether or not 
> the user can connect.  If a node is already in a bad spot requiring disable 
> binary, it's probably not a good idea to have it get DDOS'ed as part of the 
> remediation.
>
> Sorry for multiple emails.
>
> Jon
>
> On Mon, Jan 8, 2024 at 4:11 PM Jon Haddad  wrote:
>>
>> > Syntactically, if we’re updating settings like compaction throughput, I 
>> > would prefer to simply update a virtual settings table
>> > e.g. UPDATE system.settings SET compaction_throughput = 128
>>
>> I agree with this, sorry if that wasn't clear in my previous email.
>>
>> > Some operations will no doubt require a stored procedure syntax,
>>
>> The alternative to the stored procedure syntax is to have first class 
>> support for operations like REPAIR or COMPACT, which could be interesting.  
>> It might be a little nicer if the commands are first class citizens. I'm not 
>> sure what the downside would be besides adding complexity to the parser.  I 
>> think I like the idea as it would allow for intuitive tab completion (REPAIR 
>> ) and mentally fit in with the rest of the permission system, and be 
>> fairly obvious what permission relates to what action.
>>
>> cqlsh > GRANT INCREMENTAL REPAIR ON mykeyspace.mytable TO jon;
>>
>> I realize the ability to grant permissions could be done for the stored 
>> procedure syntax as well, but I think it's a bit more consistent to 
>> represent it the same way as DDL and probably better for the end user.
>>
>> Postgres seems to generally do admin stuff with SELECT function(): 
>> https://www.postgresql.org/docs/9.3/functions-admin.html.  It feels a bit 
>> weird to me to use SELECT to do things like kill DB connections, but that 
>> might just be b/c it's not how I typically work with a database.  VACUUM is 
>> a standalone command though.
>>
>> Curious to hear what people's thoughts are on this.
>>
>> > I would like to see us move to decentralised structured settings 
>> > management at the same time, so that we can set properties for the whole 
>> > cluster, or data centres, or individual nodes via the same mechanism - all 
>> > from any node in the cluster. I would be happy to help out with this work, 
>> > if time permits.
>>
>> This would be nice.  Spinnaker has this feature and I found it to be very 
>> valuable at Netflix when making large 

Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-08 Thread Jon Haddad
Ugh, I moved some stuff around and 2 paragraphs got merged that shouldn't
have been.

I think there's no way we could rip out JMX, there's just too many benefits
to having it and effectively zero benefits to removing.

Regarding disablebinary, part of me wonders if this is a bit of a hammer,
and what we really want is "disable binary for non-admins".  I'm not sure
what the best path is to get there.  The local unix socket might be the
easiest path as it allows us to disable network binary easily and still
allow local admins, and allows the OS to reject the incoming connections vs
passing that work onto a connection handler which would have to evaluate
whether or not the user can connect.  If a node is already in a bad spot
requiring disable binary, it's probably not a good idea to have it get
DDOS'ed as part of the remediation.
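The local unix socket idea can be sketched roughly like this (a hypothetical illustration only, not Cassandra code): the filesystem permissions on the socket path let the OS reject non-admin clients outright, so local operators keep admin access even while the network-facing native protocol is disabled.

```python
# Hypothetical sketch: a local unix-domain admin socket. File permissions
# (0o600) mean the OS rejects other users' connection attempts, instead of
# a connection handler having to evaluate each client.
import os
import socket
import tempfile
import threading

SOCK_PATH = os.path.join(tempfile.mkdtemp(), "admin.sock")

def admin_server(ready: threading.Event) -> None:
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    os.chmod(SOCK_PATH, 0o600)  # only the local admin user may connect
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    cmd = conn.recv(1024).decode()
    conn.sendall(b"OK: " + cmd.encode())  # echo a trivial "command result"
    conn.close()
    srv.close()

ready = threading.Event()
t = threading.Thread(target=admin_server, args=(ready,), daemon=True)
t.start()
ready.wait()

# A local admin client issuing a command over the socket:
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK_PATH)
cli.sendall(b"disablebinary")
reply = cli.recv(1024).decode()
cli.close()
```

The point of the sketch is only where the access control lives: in the socket file's mode bits, enforced by the kernel before any Cassandra code runs.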

Sorry for multiple emails.

Jon

On Mon, Jan 8, 2024 at 4:11 PM Jon Haddad  wrote:

> > Syntactically, if we’re updating settings like compaction throughput, I
> would prefer to simply update a virtual settings table
> > e.g. UPDATE system.settings SET compaction_throughput = 128
>
> I agree with this, sorry if that wasn't clear in my previous email.
>
> > Some operations will no doubt require a stored procedure syntax,
>
> The alternative to the stored procedure syntax is to have first class
> support for operations like REPAIR or COMPACT, which could be interesting.
> It might be a little nicer if the commands are first class citizens. I'm
> not sure what the downside would be besides adding complexity to the
> parser.  I think I like the idea as it would allow for intuitive tab
> completion (REPAIR ) and mentally fit in with the rest of the
> permission system, and be fairly obvious what permission relates to what
> action.
>
> cqlsh > GRANT INCREMENTAL REPAIR ON mykeyspace.mytable TO jon;
>
> I realize the ability to grant permissions could be done for the stored
> procedure syntax as well, but I think it's a bit more consistent to
> represent it the same way as DDL and probably better for the end user.
>
> Postgres seems to generally do admin stuff with SELECT function():
> https://www.postgresql.org/docs/9.3/functions-admin.html.  It feels a bit
> weird to me to use SELECT to do things like kill DB connections, but that
> might just be b/c it's not how I typically work with a database.  VACUUM is
> a standalone command though.
>
> Curious to hear what people's thoughts are on this.
>
> > I would like to see us move to decentralised structured settings
> management at the same time, so that we can set properties for the whole
> cluster, or data centres, or individual nodes via the same mechanism - all
> from any node in the cluster. I would be happy to help out with this work,
> if time permits.
>
> This would be nice.  Spinnaker has this feature and I found it to be very
> valuable at Netflix when making large changes.
>
> Regarding JMX - I think since it's about as close as we can get to "free"
> I don't really consider it to be additional overhead, a decent escape
> hatch, and I can't see us removing any functionality that most teams would
> consider critical.
>
> > We need something that's available for use before the node comes fully
> online
> > Supporting backwards compat, especially for automated ops (i.e.
> nodetool, JMX, etc), is crucial. Painful, but crucial.
>
> I think there's no way we could rip out JMX, there's just too many
> benefits to having it and effectively zero benefits to removing.  Part of
> me wonders if this is a bit of a hammer, and what we really want is
> "disable binary for non-admins".  I'm not sure what the best path is to get
> there.  The local unix socket might be the easiest path as it allows us to
> disable network binary easily and still allow local admins, and allows the
> OS to reject the incoming connections vs passing that work onto a
> connection handler which would have to evaluate whether or not the user can
> connect.  If a node is already in a bad spot requiring disable binary, it's
> probably not a good idea to have it get DDOS'ed as part of the remediation.
>
> I think it's safe to say there's no appetite to remove JMX, at least not
> for anyone that would have to rework their entire admin control plane, plus
> whatever is out there in OSS provisioning tools like puppet / chef / etc
> that rely on JMX.  I see no value whatsoever in removing it.
>
> I should probably have phrased my earlier email a bit differently.  Maybe
> this is better:
>
> Fundamentally, I think it's better for the project if administration is
> fully supported over CQL in addition to JMX, without introducing a
> redundant third option, with the project's preference being CQL.
>
>
> On Mon, Jan 8, 2024 at 9:10 AM Benedict Elliott Smith 
> wrote:
>
>> Syntactically, if we’re updating settings like compaction throughput, I
>> would prefer to simply update a virtual settings table
>>
>> e.g. UPDATE system.settings SET compaction_throughput = 128
>>
>> Some 

Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-08 Thread Jon Haddad
> Syntactically, if we’re updating settings like compaction throughput, I
would prefer to simply update a virtual settings table
> e.g. UPDATE system.settings SET compaction_throughput = 128

I agree with this, sorry if that wasn't clear in my previous email.

> Some operations will no doubt require a stored procedure syntax,

The alternative to the stored procedure syntax is to have first class
support for operations like REPAIR or COMPACT, which could be interesting.
It might be a little nicer if the commands are first class citizens. I'm
not sure what the downside would be besides adding complexity to the
parser.  I think I like the idea as it would allow for intuitive tab
completion (REPAIR ) and mentally fit in with the rest of the
permission system, and be fairly obvious what permission relates to what
action.

cqlsh > GRANT INCREMENTAL REPAIR ON mykeyspace.mytable TO jon;

I realize the ability to grant permissions could be done for the stored
procedure syntax as well, but I think it's a bit more consistent to
represent it the same way as DDL and probably better for the end user.
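The GRANT example above could be modelled roughly as follows (a toy sketch; the permission names and API are assumptions for illustration, not Cassandra's actual authorization code): each maintenance operation maps to a grantable permission on a resource, the same shape as DDL permissions.

```python
# Hypothetical sketch: maintenance operations as first-class grantable
# permissions, mirroring the existing DDL permission model.
from collections import defaultdict
from typing import Dict, Set, Tuple

# (role, resource) -> set of granted operation permissions
grants: Dict[Tuple[str, str], Set[str]] = defaultdict(set)

def grant(permission: str, resource: str, role: str) -> None:
    # e.g. GRANT INCREMENTAL REPAIR ON mykeyspace.mytable TO jon;
    grants[(role, resource)].add(permission)

def authorize(role: str, permission: str, resource: str) -> bool:
    # Checked before the operation runs, just like a DDL statement.
    return permission in grants[(role, resource)]

grant("INCREMENTAL REPAIR", "mykeyspace.mytable", "jon")
```

The benefit being argued for is exactly this symmetry: the permission name is self-evidently tied to the action it controls.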

Postgres seems to generally do admin stuff with SELECT function():
https://www.postgresql.org/docs/9.3/functions-admin.html.  It feels a bit
weird to me to use SELECT to do things like kill DB connections, but that
might just be b/c it's not how I typically work with a database.  VACUUM is
a standalone command though.

Curious to hear what people's thoughts are on this.

> I would like to see us move to decentralised structured settings
management at the same time, so that we can set properties for the whole
cluster, or data centres, or individual nodes via the same mechanism - all
from any node in the cluster. I would be happy to help out with this work,
if time permits.

This would be nice.  Spinnaker has this feature and I found it to be very
valuable at Netflix when making large changes.

Regarding JMX - I think since it's about as close as we can get to "free" I
don't really consider it to be additional overhead, a decent escape hatch,
and I can't see us removing any functionality that most teams would
consider critical.

> We need something that's available for use before the node comes fully
online
> Supporting backwards compat, especially for automated ops (i.e. nodetool,
JMX, etc), is crucial. Painful, but crucial.

I think there's no way we could rip out JMX, there's just too many benefits
to having it and effectively zero benefits to removing.  Part of me wonders
if this is a bit of a hammer, and what we really want is "disable binary
for non-admins".  I'm not sure what the best path is to get there.  The
local unix socket might be the easiest path as it allows us to disable
network binary easily and still allow local admins, and allows the OS to
reject the incoming connections vs passing that work onto a connection
handler which would have to evaluate whether or not the user can connect.
If a node is already in a bad spot requiring disable binary, it's probably
not a good idea to have it get DDOS'ed as part of the remediation.

I think it's safe to say there's no appetite to remove JMX, at least not
for anyone that would have to rework their entire admin control plane, plus
whatever is out there in OSS provisioning tools like puppet / chef / etc
that rely on JMX.  I see no value whatsoever in removing it.

I should probably have phrased my earlier email a bit differently.  Maybe
this is better:

Fundamentally, I think it's better for the project if administration is
fully supported over CQL in addition to JMX, without introducing a
redundant third option, with the project's preference being CQL.


On Mon, Jan 8, 2024 at 9:10 AM Benedict Elliott Smith 
wrote:

> Syntactically, if we’re updating settings like compaction throughput, I
> would prefer to simply update a virtual settings table
>
> e.g. UPDATE system.settings SET compaction_throughput = 128
>
> Some operations will no doubt require a stored procedure syntax, but
> perhaps it would be a good idea to split the work into two: one part to
> address settings like those above, and another for maintenance operations
> such as triggering major compactions, repair and the like?
>
> I would like to see us move to decentralised structured settings
> management at the same time, so that we can set properties for the whole
> cluster, or data centres, or individual nodes via the same mechanism - all
> from any node in the cluster. I would be happy to help out with this work,
> if time permits.
>
>
> On 8 Jan 2024, at 11:42, Josh McKenzie  wrote:
>
> Fundamentally, I think it's better for the project if administration is
> fully done over CQL and we have a consistent, single way of doing things.
>
> Strongly agree here. With 2 caveats:
>
>1. Supporting backwards compat, especially for automated ops (i.e.
>nodetool, JMX, etc), is crucial. Painful, but crucial.
>2. We need something that's available for use before the node comes
>fully online; the 

Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-08 Thread Benedict Elliott Smith
Syntactically, if we’re updating settings like compaction throughput, I would 
prefer to simply update a virtual settings table

e.g. UPDATE system.settings SET compaction_throughput = 128

Some operations will no doubt require a stored procedure syntax, but perhaps it 
would be a good idea to split the work into two: one part to address settings 
like those above, and another for maintenance operations such as triggering 
major compactions, repair and the like?

I would like to see us move to decentralised structured settings management at 
the same time, so that we can set properties for the whole cluster, or data 
centres, or individual nodes via the same mechanism - all from any node in the 
cluster. I would be happy to help out with this work, if time permits.
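One way to picture the precedence such a decentralised settings store might apply (a node-level override beats its data centre's value, which beats the cluster-wide default) is the following sketch. The scopes and API here are purely hypothetical illustration, not a proposed design:

```python
# Hypothetical sketch: resolving a property for a node, with
# node > data centre > cluster precedence.
from typing import Dict, Optional, Tuple

class SettingsStore:
    def __init__(self) -> None:
        # (scope, scope_id, property) -> value
        self._values: Dict[Tuple[str, str, str], str] = {}

    def set(self, scope: str, scope_id: str, prop: str, value: str) -> None:
        self._values[(scope, scope_id, prop)] = value

    def resolve(self, node: str, dc: str, prop: str) -> Optional[str]:
        # Most specific scope wins; fall through to broader scopes.
        for key in (("node", node, prop), ("dc", dc, prop), ("cluster", "", prop)):
            if key in self._values:
                return self._values[key]
        return None

store = SettingsStore()
store.set("cluster", "", "compaction_throughput", "64")
store.set("dc", "dc1", "compaction_throughput", "128")
store.set("node", "node3", "compaction_throughput", "256")
```

Since any node can write to the store, the same mechanism serves cluster-wide, per-DC, and per-node changes from a single entry point.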


> On 8 Jan 2024, at 11:42, Josh McKenzie  wrote:
> 
>> Fundamentally, I think it's better for the project if administration is 
>> fully done over CQL and we have a consistent, single way of doing things. 
> Strongly agree here. With 2 caveats:
> Supporting backwards compat, especially for automated ops (i.e. nodetool, 
> JMX, etc), is crucial. Painful, but crucial.
> We need something that's available for use before the node comes fully 
> online; the point Jeff always brings up when we discuss moving away from JMX. 
> So long as we have some kind of "out-of-band" access to nodes or 
> accommodation for that, we should be good.
> For context on point 2, see slack: 
> https://the-asf.slack.com/archives/CK23JSY2K/p1688745128122749?thread_ts=1688662169.018449=CK23JSY2K
> 
>> I point out that JMX works before and after the native protocol is running 
>> (startup, shutdown, joining, leaving), and also it's semi-common for us to 
>> disable the native protocol in certain circumstances, so at the very least, 
>> we'd then need to implement a totally different cql protocol interface just 
>> for administration, which nobody has committed to building yet.
> 
> I think this is a solvable problem, and I think the benefits of having a 
> single, elegant way of interacting with a cluster and configuring it 
> justifies the investment for us as a project. Assuming someone has the cycles 
> to, you know, actually do the work. :D
> 
> On Sun, Jan 7, 2024, at 10:41 PM, Jon Haddad wrote:
>> I like the idea of the ability to execute certain commands via CQL, but I 
>> think it only makes sense for the nodetool commands that cause an action to 
>> take place, such as compact or repair.  We already have virtual tables, I 
>> don't think we need another layer to run informational queries.  I see 
>> little value in having the following (I'm using exec here for simplicity):
>> 
>> cqlsh> exec tpstats
>> 
>> which returns a string in addition to:
>> 
>> cqlsh> select * from system_views.thread_pools
>> 
>> which returns structured data.  
>> 
>> I'd also rather see updatable configuration virtual tables instead of
>> 
>> cqlsh> exec setcompactionthroughput 128
>> 
>> Fundamentally, I think it's better for the project if administration is 
>> fully done over CQL and we have a consistent, single way of doing things.  
>> I'm not dead set on it, I just think less is more in a lot of situations, 
>> this being one of them.  
>> 
>> Jon
>> 
>> 
>> On Wed, Jan 3, 2024 at 2:56 PM Maxim Muzafarov wrote:
>> Happy New Year to everyone! I'd like to thank everyone for their
>> questions, because answering them forces us to move towards the right
>> solution, and I also like the ML discussions for the time they give to
>> investigate the code :-)
>> 
>> I'm deliberately trying to limit the scope of the initial solution
>> (e.g. exclude the agent part) to keep the discussion short and clear,
>> but it's also important to have a glimpse of what we can do next once
>> we've finished with the topic.
>> 
>> My view of the Command<> is that it is an abstraction in the broader
>> sense of an operation that can be performed on the local node,
>> involving one of a few internal components. This means that updating a
>> property in the settings virtual table via an update statement, or
>> executing e.g. the setconcurrentcompactors command are just aliases of
>> the same internal command via different APIs. Another example is the
>> netstats command, which simply aggregates the MessageService metrics
>> and returns them in a human-readable format (just another way of
>> looking at key-value metric pairs). More broadly, a command takes a
>> Map as its input and returns a String as its result (or a List).
>> 
>> As Abe mentioned, Command and CommandRegistry should be largely based
>> on the nodetool command set at the beginning. We have a few options
>> for how we can initially construct command metadata during the
>> registry implementation (when moving command metadata from the
>> nodetool to the core part), so I'm planning to consult with the
>> command representations of the k8cassandra project in the way of any
>> further registry adoptions have zero problems (by writing a test

Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-08 Thread Josh McKenzie
> Fundamentally, I think it's better for the project if administration is fully 
> done over CQL and we have a consistent, single way of doing things. 
Strongly agree here. With 2 caveats:
 1. Supporting backwards compat, especially for automated ops (i.e. nodetool, 
JMX, etc), is crucial. Painful, but crucial.
 2. We need something that's available for use before the node comes fully 
online; the point Jeff always brings up when we discuss moving away from JMX. 
So long as we have some kind of "out-of-band" access to nodes or accommodation 
for that, we should be good.
For context on point 2, see slack: 
https://the-asf.slack.com/archives/CK23JSY2K/p1688745128122749?thread_ts=1688662169.018449=CK23JSY2K

> I point out that JMX works before and after the native protocol is running 
> (startup, shutdown, joining, leaving), and also it's semi-common for us to 
> disable the native protocol in certain circumstances, so at the very least, 
> we'd then need to implement a totally different cql protocol interface just 
> for administration, which nobody has committed to building yet.

I think this is a solvable problem, and I think the benefits of having a 
single, elegant way of interacting with a cluster and configuring it justifies 
the investment for us as a project. Assuming someone has the cycles to, you 
know, actually do the work. :D

On Sun, Jan 7, 2024, at 10:41 PM, Jon Haddad wrote:
> I like the idea of the ability to execute certain commands via CQL, but I 
> think it only makes sense for the nodetool commands that cause an action to 
> take place, such as compact or repair.  We already have virtual tables, I 
> don't think we need another layer to run informational queries.  I see little 
> value in having the following (I'm using exec here for simplicity):
> 
> cqlsh> exec tpstats
> 
> which returns a string in addition to:
> 
> cqlsh> select * from system_views.thread_pools
> 
> which returns structured data.  
> 
> I'd also rather see updatable configuration virtual tables instead of
> 
> cqlsh> exec setcompactionthroughput 128
> 
> Fundamentally, I think it's better for the project if administration is fully 
> done over CQL and we have a consistent, single way of doing things.  I'm not 
> dead set on it, I just think less is more in a lot of situations, this being 
> one of them.  
> 
> Jon
> 
> 
> On Wed, Jan 3, 2024 at 2:56 PM Maxim Muzafarov  wrote:
>> Happy New Year to everyone! I'd like to thank everyone for their
>> questions, because answering them forces us to move towards the right
>> solution, and I also like the ML discussions for the time they give to
>> investigate the code :-)
>> 
>> I'm deliberately trying to limit the scope of the initial solution
>> (e.g. exclude the agent part) to keep the discussion short and clear,
>> but it's also important to have a glimpse of what we can do next once
>> we've finished with the topic.
>> 
>> My view of the Command<> is that it is an abstraction in the broader
>> sense of an operation that can be performed on the local node,
>> involving one of a few internal components. This means that updating a
>> property in the settings virtual table via an update statement, or
>> executing e.g. the setconcurrentcompactors command are just aliases of
>> the same internal command via different APIs. Another example is the
>> netstats command, which simply aggregates the MessageService metrics
>> and returns them in a human-readable format (just another way of
>> looking at key-value metric pairs). More broadly, the command input is
>> a Map<String, String> and the result is a String (or a List<String>).
>> 
>> As Abe mentioned, Command and CommandRegistry should be largely based
>> on the nodetool command set at the beginning. We have a few options
>> for how we can initially construct command metadata during the
>> registry implementation (when moving command metadata from the
>> nodetool to the core part), so I'm planning to consult the
>> command representations of the k8ssandra project so that any
>> further registry adoptions have zero problems (by writing a test
>> openapi registry exporter and comparing the representation results).
>> 
>> So, the MVP is the following:
>> - Command
>> - CommandRegistry
>> - CQLCommandExporter
>> - JMXCommandExporter
>> - the nodetool uses the JMXCommandExporter
>> 
>> 
>> = Answers =
>> 
>> > What do you have in mind specifically there? Do you plan on rewriting a 
>> > brand new implementation which would be partially inspired by our agent? 
>> > Or would the project integrate our agent code in-tree or as a dependency?
>> 
>> Personally, I like the state of the k8ssandra project as it is now. My
>> understanding is that the server part of a database always lags behind
>> the client and sidecar parts in terms of the jdk version and the
>> features it provides. In contrast, sidecars should always be on top of
>> the market, so if we want to make an agent part in-tree, this should
>> be carefully considered for the flexibility which 

Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-07 Thread Jon Haddad
I like the idea of the ability to execute certain commands via CQL, but I
think it only makes sense for the nodetool commands that cause an action to
take place, such as compact or repair.  We already have virtual tables, I
don't think we need another layer to run informational queries.  I see
little value in having the following (I'm using exec here for simplicity):

cqlsh> exec tpstats

which returns a string in addition to:

cqlsh> select * from system_views.thread_pools

which returns structured data.

I'd also rather see updatable configuration virtual tables instead of

cqlsh> exec setcompactionthroughput 128

Fundamentally, I think it's better for the project if administration is
fully done over CQL and we have a consistent, single way of doing things.
I'm not dead set on it, I just think less is more in a lot of situations,
this being one of them.

Jon


On Wed, Jan 3, 2024 at 2:56 PM Maxim Muzafarov  wrote:

> Happy New Year to everyone! I'd like to thank everyone for their
> questions, because answering them forces us to move towards the right
> solution, and I also like the ML discussions for the time they give to
> investigate the code :-)
>
> I'm deliberately trying to limit the scope of the initial solution
> (e.g. exclude the agent part) to keep the discussion short and clear,
> but it's also important to have a glimpse of what we can do next once
> we've finished with the topic.
>
> My view of the Command<> is that it is an abstraction in the broader
> sense of an operation that can be performed on the local node,
> involving one of a few internal components. This means that updating a
> property in the settings virtual table via an update statement, or
> executing e.g. the setconcurrentcompactors command are just aliases of
> the same internal command via different APIs. Another example is the
> netstats command, which simply aggregates the MessageService metrics
> and returns them in a human-readable format (just another way of
> looking at key-value metric pairs). More broadly, the command input is
> a Map<String, String> and the result is a String (or a List<String>).
>
> As Abe mentioned, Command and CommandRegistry should be largely based
> on the nodetool command set at the beginning. We have a few options
> for how we can initially construct command metadata during the
> registry implementation (when moving command metadata from the
> nodetool to the core part), so I'm planning to consult the
> command representations of the k8ssandra project so that any
> further registry adoptions have zero problems (by writing a test
> openapi registry exporter and comparing the representation results).
>
> So, the MVP is the following:
> - Command
> - CommandRegistry
> - CQLCommandExporter
> - JMXCommandExporter
> - the nodetool uses the JMXCommandExporter
>
>
> = Answers =
>
> > What do you have in mind specifically there? Do you plan on rewriting a
> brand new implementation which would be partially inspired by our agent? Or
> would the project integrate our agent code in-tree or as a dependency?
>
> Personally, I like the state of the k8ssandra project as it is now. My
> understanding is that the server part of a database always lags behind
> the client and sidecar parts in terms of the jdk version and the
> features it provides. In contrast, sidecars should always be on top of
> the market, so if we want to make an agent part in-tree, this should
> be carefully considered for the flexibility which we may lose, as we
> will not be able to change the agent part within the sidecar. The only
> closest change I can see is that we can remove the interceptor part
> once the CQL command interface is available. I suggest we move the
> agent part to phase 2 and research it. wdyt?
>
>
> > How are the results of the commands expressed to the CQL client? Since
> the command is being treated as CQL, I guess it will be rows, right? If
> yes, some of the nodetool commands output are a bit hierarchical in nature
> (e.g. cfstats, netstats etc...). How are these cases handled?
>
> I think the result of the execution should be a simple string (or set
> of strings), which by its nature matches the nodetool output. I would
> avoid building complex output or output schemas for now to simplify
> the initial changes.
>
>
> > Any changes expected at client/driver side?
>
> I'd like to keep the initial changes to a server part only, to avoid
> scope inflation. For the driver part, I have checked the ExecutionInfo
> interface provided by the java-driver, which should probably be used
> as a command execution status holder. We'd like to have a unique
> command execution id for each command that is executed on the node, so
> the ExecutionInfo should probably hold such an id. Currently it has
> the UUID getTracingId(), which is not well suited for our case and I
> think further changes and follow-ups will be required here (including
> the binary protocol, I think).
>
>
> > The term COMMAND is a bit abstract I feel (subjective)... And I 

Re: [DISCUSSION] CEP-38: CQL Management API

2024-01-03 Thread Maxim Muzafarov
Happy New Year to everyone! I'd like to thank everyone for their
questions, because answering them forces us to move towards the right
solution, and I also like the ML discussions for the time they give to
investigate the code :-)

I'm deliberately trying to limit the scope of the initial solution
(e.g. exclude the agent part) to keep the discussion short and clear,
but it's also important to have a glimpse of what we can do next once
we've finished with the topic.

My view of the Command<> is that it is an abstraction in the broader
sense of an operation that can be performed on the local node,
involving one of a few internal components. This means that updating a
property in the settings virtual table via an update statement, or
executing e.g. the setconcurrentcompactors command are just aliases of
the same internal command via different APIs. Another example is the
netstats command, which simply aggregates the MessageService metrics
and returns them in a human-readable format (just another way of
looking at key-value metric pairs). More broadly, the command input is
a Map<String, String> and the result is a String (or a List<String>).

As Abe mentioned, Command and CommandRegistry should be largely based
on the nodetool command set at the beginning. We have a few options
for how we can initially construct command metadata during the
registry implementation (when moving command metadata from the
nodetool to the core part), so I'm planning to consult the
command representations of the k8ssandra project so that any
further registry adoptions have zero problems (by writing a test
openapi registry exporter and comparing the representation results).

So, the MVP is the following:
- Command
- CommandRegistry
- CQLCommandExporter
- JMXCommandExporter
- the nodetool uses the JMXCommandExporter
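Assuming the registry is the single source of truth for command metadata, the MVP wiring above could look roughly like this (an illustrative Python sketch; the exporter names follow the MVP list, everything else is hypothetical). Both exporters derive their surface purely from the shared metadata, so registering a command exposes it everywhere without extra code.

```python
from typing import Dict, List

class CommandRegistry:
    """Hypothetical single source of truth for command metadata."""
    def __init__(self):
        self._commands: Dict[str, List[str]] = {}

    def register(self, name: str, params: List[str]) -> None:
        self._commands[name] = params

    def params(self, name: str) -> List[str]:
        return self._commands[name]

def cql_command_exporter(reg: CommandRegistry, name: str) -> str:
    # Derive an EXECUTE COMMAND template from the shared metadata.
    args = " ".join(f"{p}=<{p}>" for p in reg.params(name))
    return f"EXECUTE COMMAND {name} WITH {args};" if args else f"EXECUTE COMMAND {name};"

def jmx_command_exporter(reg: CommandRegistry, name: str) -> str:
    # Derive an MBean-style operation signature from the same metadata.
    sig = ", ".join(f"String {p}" for p in reg.params(name))
    return f"void {name}({sig})"

reg = CommandRegistry()
reg.register("setconcurrentcompactors", ["concurrent_compactors"])
print(cql_command_exporter(reg, "setconcurrentcompactors"))
print(jmx_command_exporter(reg, "setconcurrentcompactors"))
```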


= Answers =

> What do you have in mind specifically there? Do you plan on rewriting a brand 
> new implementation which would be partially inspired by our agent? Or would 
> the project integrate our agent code in-tree or as a dependency?

Personally, I like the state of the k8ssandra project as it is now. My
understanding is that the server part of a database always lags behind
the client and sidecar parts in terms of the jdk version and the
features it provides. In contrast, sidecars should always be on top of
the market, so if we want to make an agent part in-tree, this should
be carefully considered for the flexibility which we may lose, as we
will not be able to change the agent part within the sidecar. The only
closest change I can see is that we can remove the interceptor part
once the CQL command interface is available. I suggest we move the
agent part to phase 2 and research it. wdyt?


> How are the results of the commands expressed to the CQL client? Since the 
> command is being treated as CQL, I guess it will be rows, right? If yes, some 
> of the nodetool commands output are a bit hierarchical in nature (e.g. 
> cfstats, netstats etc...). How are these cases handled?

I think the result of the execution should be a simple string (or set
of strings), which by its nature matches the nodetool output. I would
avoid building complex output or output schemas for now to simplify
the initial changes.
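As a sketch of how even hierarchical output (netstats-like) can still be returned as a simple string, here is a hypothetical flattener; the format is illustrative only and not the real nodetool output.

```python
def render_flat(metrics: dict) -> str:
    """Flatten nested key/value metrics into a nodetool-like text block.
    Illustrative only; real commands would define their own rendering."""
    lines = []
    def walk(depth: int, node: dict) -> None:
        for key, value in node.items():
            if isinstance(value, dict):
                lines.append(f"{'  ' * depth}{key}:")   # section header
                walk(depth + 1, value)
            else:
                lines.append(f"{'  ' * depth}{key}: {value}")
    walk(0, metrics)
    return "\n".join(lines)

out = render_flat({"Large messages": {"pending": 0, "completed": 13},
                   "Small messages": {"pending": 1, "completed": 2064}})
print(out)
```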


> Any changes expected at client/driver side?

I'd like to keep the initial changes to a server part only, to avoid
scope inflation. For the driver part, I have checked the ExecutionInfo
interface provided by the java-driver, which should probably be used
as a command execution status holder. We'd like to have a unique
command execution id for each command that is executed on the node, so
the ExecutionInfo should probably hold such an id. Currently it has
the UUID getTracingId(), which is not well suited for our case and I
think further changes and follow-ups will be required here (including
the binary protocol, I think).
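A rough sketch of the "unique command execution id" idea (purely hypothetical; the actual ExecutionInfo and binary-protocol changes are an open follow-up): the server keeps a ledger keyed by a per-execution UUID that a future ExecutionInfo-style holder could surface to clients.

```python
import uuid

class CommandExecutions:
    """Hypothetical server-side ledger of command executions."""
    def __init__(self):
        self._status = {}

    def start(self, command_name: str) -> uuid.UUID:
        exec_id = uuid.uuid4()  # unique id per execution on this node
        self._status[exec_id] = ("RUNNING", command_name)
        return exec_id

    def finish(self, exec_id: uuid.UUID, result: str) -> None:
        _, name = self._status[exec_id]
        self._status[exec_id] = ("DONE", name, result)

    def status(self, exec_id: uuid.UUID):
        return self._status[exec_id]

ledger = CommandExecutions()
eid = ledger.start("compact")
ledger.finish(eid, "OK")
print(ledger.status(eid))
```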


> The term COMMAND is a bit abstract I feel (subjective)... And I also feel the 
> settings part is overlapping with virtual tables.

I think we should keep the term Command as broad as possible. As
long as we have a single implementation of a command, and the cost of
maintaining that piece of the source code is low, it's even better if
we have a few ways to achieve the same result using different APIs.
Personally, the only thing I would vote for is the separation of
command and metric terms (they shouldn't be mixed up).


> How are the responses of different operations expressed through the Command 
> API? If the Command Registry Adapters depend upon the command metadata for 
> invoking/validating the command, then I think there has to be a way for them 
> to interpret the response format also, right?

> I'm not sure I've understood the question correctly. Are you talking
about the command execution result schema and the validation of that
schema?

For now, I see the interface as follows, the result of the execution
is a type that can be converted to the same string as the nodetool has
for the corresponding 

Re: [DISCUSSION] CEP-38: CQL Management API

2023-12-05 Thread Abe Ratnofsky
Adding to Hari's comments:

> Any changes expected at client/driver side? While using JMX/nodetool, it is 
> clear that the command/operations are getting executed against which 
> Cassandra node. But a client can connect to multiple hosts and trigger 
> queries, then how can it ensure that commands are executed against the 
> desired Cassandra instance?

Clients are expected to set the node for the given CQL statement in cases like 
this; see docstring for example: 
https://github.com/apache/cassandra-java-driver/blob/4.x/core/src/main/java/com/datastax/oss/driver/api/core/cql/Statement.java#L124-L147

> The term COMMAND is a bit abstract I feel (subjective). Some of the examples 
> quoted are referring to updating settings (for example: EXECUTE COMMAND 
> setconcurrentcompactors WITH concurrent_compactors=5;) and some are referring 
> to operations. Updating settings and running operations are considerably 
> different things. They may have to be handled in their own way. And I also 
> feel the settings part is overlapping with virtual tables. If virtual tables 
> support writes (at least the settings virtual table), then settings can be 
> updated using the virtual table itself.

I agree with this - I actually think it would be clearer if this was referred 
to as nodetool, if the set of commands is going to be largely based on nodetool 
at the beginning. There is a lot of documentation online that references 
nodetool by name, and changing the nomenclature would make that existing 
documentation harder to understand. If a user can understand this as "nodetool, 
but better and over CQL not JMX" I think that's a clearer transition than a new 
concept of "commands".

I understand that this proposal includes more than just nodetool, but there's a 
benefit to having a tool with a name, and a web search for "cassandra commands" 
is going to have more competition and ambiguity.

Re: [DISCUSSION] CEP-38: CQL Management API

2023-12-05 Thread Venkata Hari Krishna Nukala
Hi Maxim,

I think this CEP is a great start to viewing Cassandra operations in a
different way! However, I have a few questions about it.

   - How are the results of the commands expressed to the CQL client? Since
   the command is being treated as CQL, I guess it will be rows, right? If
   yes, some of the nodetool commands output are a bit hierarchical in nature
   (e.g. cfstats, netstats etc...). How are these cases handled?
   - Any changes expected at client/driver side? While using JMX/nodetool,
   it is clear that the command/operations are getting executed against which
   Cassandra node. But a client can connect to multiple hosts and trigger
   queries, then how can it ensure that commands are executed against the
   desired Cassandra instance?
   - The term COMMAND is a bit abstract I feel (subjective). Some of the
   examples quoted are referring to updating settings (for example: EXECUTE
   COMMAND setconcurrentcompactors WITH concurrent_compactors=5;) and some are
   referring to operations. Updating settings and running operations are
   considerably different things. They may have to be handled in their own
   way. And I also feel the settings part is overlapping with virtual tables.
   If virtual tables support writes (at least the settings virtual table),
   then settings can be updated using the virtual table itself.
   - How are the responses of different operations expressed through the
   Command API? If the Command Registry Adapters depend upon the command
   metadata for invoking/validating the command, then I think there has to be
   a way for them to interpret the response format also, right?


Thanks!
Hari

On Wed, Nov 29, 2023 at 12:55 AM Alexander DEJANOVSKI 
wrote:

> Hi Maxim,
>
> I'm part of the K8ssandra team and am very happy to hear that you like our
> management API design.
> Looking at the CEP, I see that your current target design mentions the
> k8ssandra-management-api.
> What do you have in mind specifically there? Do you plan on rewriting a
> brand new implementation which would be partially inspired by our agent? Or
> would the project integrate our agent code in-tree or as a dependency?
> The latter would require of course changes to remove the CQL interceptor
> and run the queries naturally against Cassandra, along with extracting just
> the agent without the REST server.
> The former suggests that we'd have to modify our REST server to interact
> with the newly developed agent from the Cassandra project.
>
> For the metrics, we were using MCAC so
> far in the K8ssandra project but the use of collectd (while very convenient
> for non kubernetes use cases) and the design issues it created led us to
> build a metrics endpoint directly into the management api which hooks on to
> the metrics registry, and is out of the box scrapable by Prometheus. As you
> mentioned, it also allows us to extend the set of metrics easily, which
> we've done recently to expose running compactions and streams as metrics.
> It is also very efficient compared to JMX based alternatives.
>
> Let us know how we can help move this CEP forward as we're willing to
> participate.
> I think it would be great to have a single api that could be used by all
> sidecars, may they be custom or officially supported by the project.
>
> Cheers,
>
> Alex
>
> Le mar. 28 nov. 2023 à 01:00, Francisco Guerrero  a
> écrit :
>
>> Hi Maxim,
>>
>> Thanks for working on this CEP!
>>
>> The CEP addresses some of the features we have been discussing for
>> Cassandra Sidecar. For example, a dedicated admin port, moving towards more
>> CQL-like interfacing with Cassandra, among others.
>>
>> I think virtual tables were intended to bridge the gap between JMX and
>> CQL. However, virtual tables cannot trigger node operations, so CEP-38 is
>> finally addressing that gap.
>>
>> I look forward to collaborating in this CEP, I think Cassandra and its
>> ecosystem will greatly benefit from this enhancement.
>>
>> Best,
>> - Francisco
>>
>> On 2023/11/13 18:08:54 Maxim Muzafarov wrote:
>> > Hello everyone,
>> >
>> > While we are still waiting for the review to make the settings virtual
>> > table updatable (CASSANDRA-15254), which will improve the
>> > configuration management experience for users, I'd like to take
>> > another step forward and improve the C* management approach we have as
>> > a whole. This approach aims to make all Cassandra management commands
>> > accessible via CQL, but not only that.
>> >
>> > The problem of making commands accessible via CQL presents a complex
>> > challenge, especially if we aim to minimize code duplication across
>> > the implementation of management operations for different APIs and
>> > reduce the overall maintenance burden. The proposal's scope goes
>> > beyond simply introducing a new CQL syntax. It encompasses several key
>> > objectives for C* management operations, beyond their availability
>> > through 

Re: [DISCUSSION] CEP-38: CQL Management API

2023-11-28 Thread Alexander DEJANOVSKI
Hi Maxim,

I'm part of the K8ssandra team and am very happy to hear that you like our
management API design.
Looking at the CEP, I see that your current target design mentions the
k8ssandra-management-api.
What do you have in mind specifically there? Do you plan on rewriting a
brand new implementation which would be partially inspired by our agent? Or
would the project integrate our agent code in-tree or as a dependency?
The latter would require of course changes to remove the CQL interceptor
and run the queries naturally against Cassandra, along with extracting just
the agent without the REST server.
The former suggests that we'd have to modify our REST server to interact
with the newly developed agent from the Cassandra project.

For the metrics, we were using MCAC so far
in the K8ssandra project but the use of collectd (while very convenient for
non kubernetes use cases) and the design issues it created led us to build
a metrics endpoint directly into the management api which hooks on to the
metrics registry, and is out of the box scrapable by Prometheus. As you
mentioned, it also allows us to extend the set of metrics easily, which
we've done recently to expose running compactions and streams as metrics.
It is also very efficient compared to JMX based alternatives.

Let us know how we can help move this CEP forward as we're willing to
participate.
I think it would be great to have a single api that could be used by all
sidecars, may they be custom or officially supported by the project.

Cheers,

Alex

Le mar. 28 nov. 2023 à 01:00, Francisco Guerrero  a
écrit :

> Hi Maxim,
>
> Thanks for working on this CEP!
>
> The CEP addresses some of the features we have been discussing for
> Cassandra Sidecar. For example, a dedicated admin port, moving towards more
> CQL-like interfacing with Cassandra, among others.
>
> I think virtual tables were intended to bridge the gap between JMX and CQL.
> However, virtual tables cannot trigger node operations, so CEP-38 is
> finally addressing that gap.
>
> I look forward to collaborating in this CEP, I think Cassandra and its
> ecosystem will greatly benefit from this enhancement.
>
> Best,
> - Francisco
>
> On 2023/11/13 18:08:54 Maxim Muzafarov wrote:
> > Hello everyone,
> >
> > While we are still waiting for the review to make the settings virtual
> > table updatable (CASSANDRA-15254), which will improve the
> > configuration management experience for users, I'd like to take
> > another step forward and improve the C* management approach we have as
> > a whole. This approach aims to make all Cassandra management commands
> > accessible via CQL, but not only that.
> >
> > The problem of making commands accessible via CQL presents a complex
> > challenge, especially if we aim to minimize code duplication across
> > the implementation of management operations for different APIs and
> > reduce the overall maintenance burden. The proposal's scope goes
> > beyond simply introducing a new CQL syntax. It encompasses several key
> > objectives for C* management operations, beyond their availability
> > through CQL:
> > - Ensure consistency across all public APIs we support, including JMX
> > MBeans and the newly introduced CQL. Users should see consistent
> > command specifications and arguments, irrespective of whether they're
> > using an API or a CLI;
> > - Reduce source code maintenance costs. With this new approach, when a
> > new command is implemented, it should automatically become available
> > across JMX MBeans, nodetool, CQL, and Cassandra Sidecar, eliminating
> > the need for additional coding;
> > - Maintain backward compatibility, ensuring that existing setups and
> > workflows continue to work the same way as they do today;
> >
> > I would suggest discussing the overall design concept first, and then
> > diving into the CQL command syntax and other details once we've found
> > common ground on the community's vision. However, regardless of these
> > details, I would appreciate any feedback on the design.
> >
> > I look forward to your comments!
> >
> > Please, see the design document: CEP-38: CQL Management API
> >
> https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-38%3A+CQL+Management+API
> >
>


Re: [DISCUSSION] CEP-38: CQL Management API

2023-11-27 Thread Francisco Guerrero
Hi Maxim,

Thanks for working on this CEP! 

The CEP addresses some of the features we have been discussing for Cassandra 
Sidecar. For example, a dedicated admin port, moving towards more CQL-like 
interfacing with Cassandra, among others.

I think virtual tables were intended to bridge the gap between JMX and CQL. 
However, virtual tables cannot trigger node operations, so CEP-38 is finally 
addressing that gap.

I look forward to collaborating in this CEP, I think Cassandra and its 
ecosystem will greatly benefit from this enhancement.

Best,
- Francisco

On 2023/11/13 18:08:54 Maxim Muzafarov wrote:
> Hello everyone,
> 
> While we are still waiting for the review to make the settings virtual
> table updatable (CASSANDRA-15254), which will improve the
> configuration management experience for users, I'd like to take
> another step forward and improve the C* management approach we have as
> a whole. This approach aims to make all Cassandra management commands
> accessible via CQL, but not only that.
> 
> The problem of making commands accessible via CQL presents a complex
> challenge, especially if we aim to minimize code duplication across
> the implementation of management operations for different APIs and
> reduce the overall maintenance burden. The proposal's scope goes
> beyond simply introducing a new CQL syntax. It encompasses several key
> objectives for C* management operations, beyond their availability
> through CQL:
> - Ensure consistency across all public APIs we support, including JMX
> MBeans and the newly introduced CQL. Users should see consistent
> command specifications and arguments, irrespective of whether they're
> using an API or a CLI;
> - Reduce source code maintenance costs. With this new approach, when a
> new command is implemented, it should automatically become available
> across JMX MBeans, nodetool, CQL, and Cassandra Sidecar, eliminating
> the need for additional coding;
> - Maintain backward compatibility, ensuring that existing setups and
> workflows continue to work the same way as they do today;
> 
> I would suggest discussing the overall design concept first, and then
> diving into the CQL command syntax and other details once we've found
> common ground on the community's vision. However, regardless of these
> details, I would appreciate any feedback on the design.
> 
> I look forward to your comments!
> 
> Please, see the design document: CEP-38: CQL Management API
> https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-38%3A+CQL+Management+API
> 


Re: [EXTERNAL] Re: [DISCUSSION] CEP-38: CQL Management API

2023-11-23 Thread Maxim Muzafarov
Hi,

I've made a few updates to the design document to reflect the comments
we've received. However, the admin port (service port) has not been
removed from the document yet, but I think we can do that since it's
not necessary for the initial implementation. I also think that the
k8ssandra-management-api fits perfectly into the design that we
have. A few notes on both of these topics, read on.

Admin port (Service port)

For the initial implementation, a dedicated admin port is not required
because these changes do not add any new API consumers. New CQL
management operations can be exposed through the standard data ports
using the role mode that we already have as an experimental API - the
existing tools will continue to work as they do now: the nodetool
and Apache Sidecar via JMX client, and k8ssandra-management-api via
unix domain socket.
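For illustration, the unix-domain-socket transport mentioned above can be sketched in a few lines (a toy one-shot admin channel, not the native protocol; only processes with filesystem access to the socket file can connect, which is the security property being relied on):

```python
import os
import socket
import tempfile
import threading

# Toy local-only admin channel over a unix domain socket (illustrative;
# the real CEP-38 transport would speak the CQL native protocol).
sock_path = os.path.join(tempfile.mkdtemp(), "admin.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)  # reachable only via the local filesystem
srv.listen(1)

def serve_one():
    conn, _ = srv.accept()
    with conn:
        cmd = conn.recv(1024).decode().strip()
        conn.sendall(f"executed: {cmd}\n".encode())

t = threading.Thread(target=serve_one)
t.start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(sock_path)
cli.sendall(b"disablebinary\n")
reply = cli.recv(1024).decode()
cli.close()
t.join()
srv.close()
os.remove(sock_path)
print(reply)
```

Because the channel does not depend on the native-protocol data port, it keeps working after a `disablebinary`, which is the main operational concern raised in the thread.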

Personally, I like the idea of the admin port, but since it's not
needed right now, we can leave the decision for phase 2 of the CEP,
e.g. making the `cqlsh / as sysdba` command work also requires some
effort. So, I can remove the service port from the basic requirements
right now. Wdyt?

k8ssandra-management-api

I've been poking around the code and I'm really excited about the way
how it's implemented and how it uses the Unix domain socket. I really
like those sorts of things. The pluggable agent that does the metrics
export has already been mentioned as part of another CEP-1 [1], so
seeing it as a part of the Apache Cassandra itself that can be reused
by other projects makes sense to me. It should also make it easier for
C* to add new metrics, as API consistency is guaranteed and bugs are
caught right on the CI of the main project.

In general, k8ssandra can benefit once the design is implemented:
1. As a first step, we can implement a new REST API test adapter for
CommandRegistry that exposes available commands and can be used to
align our work with all the management endpoints exposed by the
k8ssandra. This step doesn't require any changes on the k8ssandra
project side.
2. Since the endpoints are statically configured we can change the
hardcoded `CALL NodeOps.<>` queries to the real CQLs and remove the
query interceptors, as all of them will be available via CQL after the
design is implemented (see the diagram in the CEP-38);
3. We can switch from the statically configured endpoints to the
dynamically configured ones as all the command metadata are available
in the CommandRegistry and don't need to be written in the code beyond
the C* itself.
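Step 1's test adapter could be as small as a function that renders the registry metadata in an OpenAPI-like shape for comparison (hypothetical sketch; the path layout is made up, not k8ssandra's actual endpoints):

```python
from typing import Dict, List

def openapi_exporter(commands: Dict[str, List[str]]) -> dict:
    """Render command metadata as a minimal OpenAPI-style document.
    Hypothetical test adapter for comparing representations."""
    paths = {}
    for name, params in commands.items():
        paths[f"/ops/{name}"] = {
            "post": {
                "operationId": name,
                "parameters": [
                    {"name": p, "in": "query", "schema": {"type": "string"}}
                    for p in params
                ],
            }
        }
    return {"openapi": "3.0.0",
            "info": {"title": "command registry", "version": "0"},
            "paths": paths}

doc = openapi_exporter({"setconcurrentcompactors": ["concurrent_compactors"]})
print(sorted(doc["paths"]))
```

Diffing such generated documents against a sidecar's hand-written endpoint list is one cheap way to verify the registry metadata is complete before any dynamic wiring is attempted.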


Thoughts?


[1] 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652224#CEP1:ApacheCassandraManagementProcess(es)-metrics

On Mon, 20 Nov 2023 at 21:10, Jake Luciani  wrote:
>
> Hi,
>
> I originally worked on the management API sidecar mentioned above.
> I'm excited to see there's renewed interest in the cql for ops concept.
>
> Though it currently uses an agent to inject the local socket for cql
> (so it can be used by older versions of Apache Cassandra),
> Logic similar to the management api project's could be added directly
> to C* versions if the sidecar project has picked versions it would
> support.
>
> I think for security reasons and operability the local unix socket is
> the cleanest way to support cql as management.
> It also works very well for any sidecar to access ops (while not
> messing with JMX).
>
> Let me know if there's anything I can do to help.
>
> Jake
>
> On Mon, Nov 20, 2023 at 11:40 AM German Eichberger via dev
>  wrote:
> >
> > Hi,
> >
> > From a cloud provider perspective we expose the storage port to customers 
> > for Hybrid scenarios (e.g. fusing on-prem Cassandra with in-cloud 
> > Cassandra) so would prefer an extra port or a socket.
> > Thanks,
> > German
> >
> > ________________
> > From: Dinesh Joshi 
> > Sent: Friday, November 17, 2023 4:06 PM
> > To: dev 
> > Subject: [EXTERNAL] Re: [DISCUSSION] CEP-38: CQL Management API
> >
> > Hi Maxim,
> >
> > Thanks for putting this CEP together! This is a great start. I have gone 
> > over the CEP and there is one thing that stuck out to me.
> >
> > Among the 'basic requirements', I see you have this -
> >
> > > A dedicated admin port with the native protocol behind it,
> > > allowing only admin commands, to address the concerns when
> > > the native protocol is disabled in certain circumstances
> > > e.g. the disablebinary command is executed;
> >
> > I understand what you're achieve here. However, there are a few reasons we 
> > should probably offer some choice to our users w.r.t. using a dedicated 
> > port for management functions.
> >
> > Today Cassandra exposes several ports - 9042, 9142, 7000 and 7001. The 
> > 

Re: [EXTERNAL] Re: [DISCUSSION] CEP-38: CQL Management API

2023-11-20 Thread Jake Luciani
Hi,

I originally worked on the management API sidecar mentioned above.
I'm excited to see there's renewed interest in the CQL-for-ops concept.

It currently uses an agent to inject the local socket for CQL (so it
can be used with older versions of Apache Cassandra), but similar
logic could be added directly to the C* versions the sidecar project
chooses to support.

For security and operability reasons, I think the local Unix socket is
the cleanest way to support CQL as a management interface. It also
works very well for any sidecar that needs to perform ops (without
touching JMX).
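For those who haven't played with it, here is a minimal Java sketch of the idea (purely illustrative, not the management-api code), using the JDK 16+ Unix-domain-socket support: the admin channel is reachable only by processes on the same host, such as a sidecar, and is independent of whether the TCP native transport is up.

```java
import java.io.IOException;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class AdminSocketSketch {

    // Bind an "admin" listener on a Unix socket, connect as a local client,
    // submit one command string, and return the server's acknowledgement.
    static String roundTrip(String command) throws IOException, InterruptedException {
        Path socketPath = Path.of(System.getProperty("java.io.tmpdir"), "cassandra-admin-demo.sock");
        Files.deleteIfExists(socketPath);
        UnixDomainSocketAddress address = UnixDomainSocketAddress.of(socketPath);

        try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(address);

            // "Server" side: accept one connection and acknowledge the command.
            Thread handler = new Thread(() -> {
                try (SocketChannel peer = server.accept()) {
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    peer.read(buf);
                    buf.flip();
                    peer.write(StandardCharsets.UTF_8.encode("ACK: " + StandardCharsets.UTF_8.decode(buf)));
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            handler.start();

            // "Sidecar" side: connect over the local socket and submit a command.
            try (SocketChannel client = SocketChannel.open(address)) {
                client.write(StandardCharsets.UTF_8.encode(command));
                ByteBuffer buf = ByteBuffer.allocate(256);
                client.read(buf);
                buf.flip();
                handler.join();
                return StandardCharsets.UTF_8.decode(buf).toString();
            }
        } finally {
            Files.deleteIfExists(socketPath);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("decommission --force false"));
    }
}
```

In the real thing the payload would of course be native-protocol frames carrying CQL, not raw strings, but the access-control property is the same: filesystem permissions on the socket path gate who can manage the node.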

Let me know if there's anything I can do to help.

Jake


Re: [EXTERNAL] Re: [DISCUSSION] CEP-38: CQL Management API

2023-11-20 Thread German Eichberger via dev
Hi,

From a cloud provider perspective we expose the storage port to
customers for Hybrid scenarios (e.g. fusing on-prem Cassandra with
in-cloud Cassandra), so we would prefer an extra port or a socket.
Thanks,
German





Re: [DISCUSSION] CEP-38: CQL Management API

2023-11-17 Thread Dinesh Joshi
Hi Maxim,

Thanks for putting this CEP together! This is a great start. I have gone over 
the CEP and there is one thing that stuck out to me.

Among the 'basic requirements', I see you have this -

> A dedicated admin port with the native protocol behind it, 
> allowing only admin commands, to address the concerns when
> the native protocol is disabled in certain circumstances 
> e.g. the disablebinary command is executed;

I understand what you're trying to achieve here. However, there are a few 
reasons we should probably offer some choice to our users w.r.t. using a 
dedicated port for management functions.

Today Cassandra exposes several ports - 9042, 9142, 7000 and 7001. The sidecar 
runs on port 9043. That's a lot of ports. I would prefer to allow users to 
access management functionality over one of the existing ports.

I realize that this would mean a subtle change in behavior for disablebinary 
when we offer it over port 9042 and not when the operator decides to use a 
dedicated port.

More importantly, I think having this functionality exposed over the storage 
ports may be even better. The storage ports are typically firewalled off from 
the end users. Operators and tooling, however, usually have access to these 
ports. This especially makes sense from a security standpoint where we'd like 
to limit users from accessing management functionality.

What do others think about this approach?

thanks,

Dinesh




Re: [DISCUSSION] CEP-38: CQL Management API

2023-11-15 Thread Maxim Muzafarov
Hello German,

Thanks for the links. I've seen this project before, but to be honest
I've never delved that deeply into it. I'll definitely check it out for
more details; give me a few days to get into context!

As for the admin port, it's already part of the proposal, as discussed
in Slack. This port is needed not only because we are mixing the data
and control planes, but also because the native protocol can be
disabled, e.g. manually via the nodetool disablebinary command, or via
the disk_failure_policy 'stop' policy, which shuts down all transports
and leaves a node operable only via JMX, which doesn't match our goals.





Re: [DISCUSSION] CEP-38: CQL Management API

2023-11-15 Thread German Eichberger via dev
Hi Maxim,

We have adopted/forked the agent part of the 
https://github.com/k8ssandra/management-api-for-apache-cassandra project, 
which aims to do similar things. I especially like how they have a local 
database socket where a sidecar can easily access Cassandra and execute CQL 
commands without the need for a service account like your example suggests.

The syntax they adopted (see for instance 
https://github.com/k8ssandra/management-api-for-apache-cassandra/blob/7cb367eac46a12947bb87486456d3f905f37628b/management-api-server/src/main/java/com/datastax/mgmtapi/resources/NodeOpsResources.java#L115)
 looks like `CALL NodeOps.decommission(?, ?)", force, false)`, which is 
similar to your execute - just throwing this out as another example.

I definitely like settling on the CQL interface, since that avoids having to 
load different JMX bindings for different Cassandra versions, making things 
cleaner and more easily accessible. There is some security concern in mixing 
the data and control planes, so I would like to see some way to restrict 
access, as the mgmt API does by making the admin commands available only on 
the socket. Maybe have a special admin port or socket?

I prefer making the agent part of the management API become part of Cassandra, 
either through your CEP or by other means, but I can also see it as an 
adjacent subproject - let's discuss.

German




[DISCUSSION] CEP-38: CQL Management API

2023-11-13 Thread Maxim Muzafarov
Hello everyone,

While we are still waiting for the review to make the settings virtual
table updatable (CASSANDRA-15254), which will improve the
configuration management experience for users, I'd like to take
another step forward and improve the C* management approach we have as
a whole. This approach aims to make all Cassandra management commands
accessible via CQL, but not only that.

The problem of making commands accessible via CQL presents a complex
challenge, especially if we aim to minimize code duplication across
the implementation of management operations for different APIs and
reduce the overall maintenance burden. The proposal's scope goes
beyond simply introducing a new CQL syntax. It encompasses several key
objectives for C* management operations, beyond their availability
through CQL:
- Ensure consistency across all public APIs we support, including JMX
MBeans and the newly introduced CQL. Users should see consistent
command specifications and arguments, irrespective of whether they're
using an API or a CLI;
- Reduce source code maintenance costs. With this new approach, when a
new command is implemented, it should automatically become available
across JMX MBeans, nodetool, CQL, and Cassandra Sidecar, eliminating
the need for additional coding;
- Maintain backward compatibility, ensuring that existing setups and
workflows continue to work the same way as they do today;

I would suggest discussing the overall design concept first, and then
diving into the CQL command syntax and other details once we've found
common ground on the community's vision. However, regardless of these
details, I would appreciate any feedback on the design.

I look forward to your comments!

Please, see the design document: CEP-38: CQL Management API
https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-38%3A+CQL+Management+API