Cassandra Community Development 12-Month Report

2024-02-22 Thread Melissa Logan
Everyone:

Below is a 12-month summary of Cassandra Community Development
(ComDev) work achieved in 2023 to help grow awareness and drive
interest in and adoption of Cassandra. (1) It also includes ideas for
what we can collectively achieve in 2024.

We'll discuss this in our March MWG meeting (2), but please feel free to
share your thoughts in the meantime, e.g. what works, what doesn't, and
what ComDev should prioritize in 2024.

I look forward to another year of growth for the Cassandra community!

(1) 
https://docs.google.com/presentation/d/1Nbg6Vv3HgAX2TiQpoW82XJozW9U-88lfb0fX2ciT-Vs/edit#slide=id.g10ffa3d4c49_0_109
(2) https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=240883297

-- 
Melissa Logan (she/her)
Member, Apache Software Foundation
Founder, Constantia.io


Re: Table name length limit in Cassandra

2024-02-22 Thread Gaurav Agarwal
Thanks, Bowen, good to know about the performance considerations. I am using
a Java client and will go ahead and check for any interoperability issues.
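
For what it's worth, the round-trip I have in mind is roughly the sketch
below (keyspace "ks", the default contact point, and a node patched to
accept 64-character names are all assumptions):

    import com.datastax.oss.driver.api.core.CqlSession;

    public class LongTableNameCheck {
        public static void main(String[] args) {
            // Hypothetical 64-character name; needs a server patched to accept it.
            String table = "t".repeat(64);
            // Default builder targets 127.0.0.1:9042; keyspace "ks" must already exist.
            try (CqlSession session = CqlSession.builder().build()) {
                session.execute("CREATE TABLE IF NOT EXISTS ks." + table + " (id int PRIMARY KEY)");
                session.execute("INSERT INTO ks." + table + " (id) VALUES (1)");
                // If anything in the tool chain truncates the name, this read-back fails loudly.
                System.out.println(session.execute("SELECT id FROM ks." + table).one().getInt("id"));
            }
        }
    }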

Regards!

On Thu, Feb 22, 2024 at 7:01 AM Bowen Song via dev 
wrote:

> Hi Gaurav,
>
> I would be less worried about performance issues than interoperability
> issues. Other tools/client libraries do not expect longer names, which
> may cause them to behave unexpectedly (e.g. truncating/crashing/...).
>
> If you can, try to get rid of common prefixes/suffixes, and use
> abbreviations where possible. You shouldn't have thousands of tables
> (and yes, there are performance issues with that), so the table name
> length limit really shouldn't be an issue.
>
> Best,
> Bowen
>
> On 22/02/2024 05:47, Gaurav Agarwal wrote:
> > Hi team,
> >
> > Currently Cassandra has a table name length limit of 48 characters. If
> > I understand correctly, the limit was chosen because file names cannot
> > be longer than 255 characters on Windows, whereas Linux supports file
> > paths of up to 4096 bytes.
> >
> > Given my Cassandra nodes are on Linux systems, can I increase the
> > limit from 48 characters to 64 characters? Will there be any
> > performance issues due to increasing the limit?
> >
> > Thanks
> > Gaurav
>


Re: [EXTERNAL] Re: [Discuss] Generic Purpose Rate Limiter in Cassandra

2024-02-22 Thread Jaydeep Chovatia
Thanks, Josh. I will file an official CEP with all the details in a few
days and update this thread with that CEP number.
Thanks a lot, everyone, for providing valuable insights!

Jaydeep

On Thu, Feb 22, 2024 at 9:24 AM Josh McKenzie  wrote:

> Do folks think we should file an official CEP and take it there?
>
> +1 here.
>
> Synthesizing your gdoc, Caleb's work, and the feedback from this thread
> into a draft seems like a solid next step.
>
> On Wed, Feb 7, 2024, at 12:31 PM, Jaydeep Chovatia wrote:
>
> I see a lot of great ideas that have been discussed or proposed in the
> past to cover the most common candidate use cases for a rate limiter. Do
> folks think we should file an official CEP and take it there?
>
> Jaydeep
>
> On Fri, Feb 2, 2024 at 8:30 AM Caleb Rackliffe 
> wrote:
>
> I just remembered the other day that I had done a quick writeup on the
> state of compaction stress-related throttling in the project:
>
>
> https://docs.google.com/document/d/1dfTEcKVidRKC1EWu3SO1kE1iVLMdaJ9uY1WMpS3P_hs/edit?usp=sharing
>
> I'm sure most of it is old news to the people on this thread, but I
> figured I'd post it just in case :)
>
> On Tue, Jan 30, 2024 at 11:58 AM Josh McKenzie 
> wrote:
>
>
> 2.) We should make sure the links between the "known" root causes of
> cascading failures and the mechanisms we introduce to avoid them remain
> very strong.
>
> Seems to me that our historical strategy was to address individual known
> cases one-by-one rather than looking for a more holistic load-balancing and
> load-shedding solution. While the engineer in me likes the elegance of a
> broad, more-inclusive *actual SEDA-like* approach, the pragmatist in me
> wonders how far we think we are today from a stable set-point.
>
> i.e. are we facing a handful of cases where nodes can still get pushed
> over and then cascade that we can surgically address, or are we facing a
> broader lack of back-pressure that rears its head in different domains
> (client -> coordinator, coordinator -> replica, internode with other
> operations, etc) at surprising times and should be considered more
> holistically?
>
> On Tue, Jan 30, 2024, at 12:31 AM, Caleb Rackliffe wrote:
>
> I almost forgot CASSANDRA-15817, which introduced
> reject_repair_compaction_threshold, a mechanism to stop
> repairs while compaction is underwater.
>
> On Jan 26, 2024, at 6:22 PM, Caleb Rackliffe 
> wrote:
>
>
> Hey all,
>
> I'm a bit late to the discussion. I see that we've already discussed
> CASSANDRA-15013 
>  and CASSANDRA-16663
>  at least in
> passing. Having written the latter, I'd be the first to admit it's a crude
> tool, although it's been useful here and there, and provides a couple
> primitives that may be useful for future work. As Scott mentions, while it
> is configurable at runtime, it is not adaptive, although we did
> make configuration easier in CASSANDRA-17423
> . It also is
> global to the node, although we've lightly discussed some ideas around
> making it more granular. (For example, keyspace-based limiting, or limiting
> "domains" tagged by the client in requests, could be interesting.) It also
> does not deal with inter-node traffic, of course.
>
> Something we've not yet mentioned (that does address internode traffic) is
> CASSANDRA-17324 ,
> which I proposed shortly after working on the native request limiter (and
> have just not had much time to return to). The basic idea is this:
>
> When a node is struggling under the weight of a compaction backlog and
> becomes a cause of increased read latency for clients, we have two safety
> valves:
>
>
> 1.) Disabling the native protocol server, which stops the node from
> coordinating reads and writes.
> 2.) Jacking up the severity on the node, which tells the dynamic snitch to
> avoid the node for reads from other coordinators.
>
>
> These are useful, but we don’t appear to have any mechanism that would
> allow us to temporarily reject internode hint, batch, and mutation messages
> that could further delay resolution of the compaction backlog.
>
>
> Whether it's done as part of a larger framework or on its own, it still
> feels like a good idea.
>
> Thinking in terms of opportunity costs here (i.e. where we spend our
> finite engineering time to holistically improve the experience of operating
> this database) is healthy, but we probably haven't reached the point of
> diminishing returns on nodes being able to protect themselves from clients
> and from other nodes. I would just keep in mind two things:
>
> 1.) The effectiveness of rate-limiting in the system (which includes the
> database and all clients) as a whole necessarily decreases as we move from
> the application to the lowest-level database internals. Limiting correctly
> at the client will save more resources than 

Re: [Discuss] Repair inside C*

2024-02-22 Thread Paulo Motta
Apologies, I only just read the earlier messages and missed the prior
discussion of sidecar vs. main process on this thread. :-)

It does not look like a final agreement was reached, and there are good
arguments on both sides, but perhaps it would be worth settling this
before a CEP is proposed, since it will significantly influence the
initial design?

I tend to agree with Dinesh and Scott's pragmatic stance of initially
supporting repair scheduling in the sidecar, since it has fewer
dependencies, and progressively moving what makes sense into the main
process as TCM/Accord primitives become available and mature.
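
To make the sidecar option concrete, a first cut could be as small as a
serialized loop over tables. The sketch below is illustrative only; the
runRepair hook and the fixed-delay policy are placeholders, not a
proposed design:

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public final class NaiveRepairScheduler {
        // Single-threaded, so at most one repair runs at a time on this node.
        private final ScheduledExecutorService executor =
                Executors.newSingleThreadScheduledExecutor();

        // Placeholder: a sidecar would call the management API or shell out
        // to "nodetool repair <keyspace> <table>" here.
        void runRepair(String keyspace, String table) {
        }

        // Repair every (keyspace, table) pair once per cycle, serialized.
        public void start(List<String[]> tables, long cycleHours) {
            executor.scheduleWithFixedDelay(() -> {
                for (String[] kt : tables) {
                    runRepair(kt[0], kt[1]);
                }
            }, 0, cycleHours, TimeUnit.HOURS);
        }
    }

Everything interesting (failure handling, token-subrange splitting,
backing off under compaction pressure) layers on top of that loop, which
is exactly what a CEP would need to pin down.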

On Thu, Feb 22, 2024 at 1:44 PM Paulo Motta  wrote:

> +1 to Josh's points. The project has considered native repair scheduling
> for a long time, but it was never made a reality due to the complex
> considerations involved and the availability of custom implementations/tools
> like cassandra-reaper, which is a popular way of scheduling repairs in
> Cassandra.
>
> Unfortunately I did not have cycles to review this proposal, but it looks
> promising from a quick glance.
>
> One important consideration that I think we need to discuss is where
> repair scheduling should live: in the main process or in the sidecar?
>
> I think there is a lot of complexity around repair in the main process and
> we need to be extra careful about adding additional complexity on top of
> that.
>
> Perhaps this could be a good opportunity to have the sidecar host repair
> scheduling, since this looks to be a control plane responsibility?
> One downside is that this would not make repair scheduling available to
> users who do not use the sidecar.
>
> What do you think? It would be great to have input from sidecar
> maintainers if this is something that would make sense for that subproject.
>
> On Thu, Feb 22, 2024 at 12:33 PM Josh McKenzie 
> wrote:
>
>> Very late response from me here (basically necro'ing this thread).
>>
>> I think it'd be useful to get this condensed into a CEP that we can then
>> discuss in that format. It's clearly something we all agree we need and
>> having an implementation that works, even if it's not in your preferred
>> execution domain, is vastly better than nothing IMO.
>>
>> I don't have cycles (nor background ;) ) to do that, but it sounds like
>> you do, Jaydeep, given the implementation you have on a private fork + design.
>>
>> A non-exhaustive list of things that might be useful to incorporate into
>> or reference from a CEP:
>> Slack thread:
>> https://the-asf.slack.com/archives/CK23JSY2K/p1690225062383619
>> Joey's old C* ticket:
>> https://issues.apache.org/jira/browse/CASSANDRA-14346
>> Even older automatic repair scheduling:
>> https://issues.apache.org/jira/browse/CASSANDRA-10070
>> Your design gdoc:
>> https://docs.google.com/document/d/1CJWxjEi-mBABPMZ3VWJ9w5KavWfJETAGxfUpsViPcPo/edit#heading=h.r112r46toau0
>> PR with automated repair:
>> https://github.com/jaydeepkumar1984/cassandra/commit/ef6456d652c0d07cf29d88dfea03b73704814c2c
>>
>> My intuition is that we're all basically in agreement that this is
>> something the DB needs, we're all willing to bikeshed for our personal
>> preference on where it lives and how it's implemented, and at the end of
>> the day, code talks. I don't think anyone's said they'll die on the hill of
>> implementation details, so that feels like CEP time to me.
>>
>> If you were willing and able to get a CEP together for automated repair
>> based on the above material, given you've done the work and have the proof
>> points it's working at scale, I think this would be a *huge contribution*
>> to the community.
>>
>> On Thu, Aug 24, 2023, at 7:26 PM, Jaydeep Chovatia wrote:
>>
>> Is anyone going to file an official CEP for this?
>> As mentioned in this email thread, here is one solution's design
>> doc
>> 
>> and source code on a private Apache Cassandra patch. Could you go through
>> it and let me know what you think?
>>
>> Jaydeep
>>
>> On Wed, Aug 2, 2023 at 3:54 PM Jon Haddad 
>> wrote:
>>
>> > That said I would happily support an effort to bring repair scheduling
>> to the sidecar immediately. This has nothing blocking it, and would
>> potentially enable the sidecar to provide an official repair scheduling
>> solution that is compatible with current or even previous versions of the
>> database.
>>
>> This is something I hadn't thought much about, and is a pretty good
>> argument for using the sidecar initially. There are a lot of deployments out
>> there and having an official repair option would be a big win.
>>
>>
>> On 2023/07/26 23:20:07 "C. Scott Andreas" wrote:
>> > I agree that it would be ideal for Cassandra to have a repair scheduler
>> in-DB.
>> >
>> > That said I would happily support an effort to bring repair scheduling
>> to the sidecar immediately. This has nothing blocking it, and would
>> 

Re: [Discuss] Repair inside C*

2024-02-22 Thread Paulo Motta
+1 to Josh's points. The project has considered native repair scheduling
for a long time, but it was never made a reality due to the complex
considerations involved and the availability of custom implementations/tools
like cassandra-reaper, which is a popular way of scheduling repairs in
Cassandra.

Unfortunately I did not have cycles to review this proposal, but it looks
promising from a quick glance.

One important consideration that I think we need to discuss is where
repair scheduling should live: in the main process or in the sidecar?

I think there is a lot of complexity around repair in the main process and
we need to be extra careful about adding additional complexity on top of
that.

Perhaps this could be a good opportunity to have the sidecar host repair
scheduling, since this looks to be a control plane responsibility?
One downside is that this would not make repair scheduling available to
users who do not use the sidecar.

What do you think? It would be great to have input from sidecar maintainers
if this is something that would make sense for that subproject.

On Thu, Feb 22, 2024 at 12:33 PM Josh McKenzie  wrote:

> Very late response from me here (basically necro'ing this thread).
>
> I think it'd be useful to get this condensed into a CEP that we can then
> discuss in that format. It's clearly something we all agree we need and
> having an implementation that works, even if it's not in your preferred
> execution domain, is vastly better than nothing IMO.
>
> I don't have cycles (nor background ;) ) to do that, but it sounds like
> you do, Jaydeep, given the implementation you have on a private fork + design.
>
> A non-exhaustive list of things that might be useful to incorporate into or
> reference from a CEP:
> Slack thread:
> https://the-asf.slack.com/archives/CK23JSY2K/p1690225062383619
> Joey's old C* ticket:
> https://issues.apache.org/jira/browse/CASSANDRA-14346
> Even older automatic repair scheduling:
> https://issues.apache.org/jira/browse/CASSANDRA-10070
> Your design gdoc:
> https://docs.google.com/document/d/1CJWxjEi-mBABPMZ3VWJ9w5KavWfJETAGxfUpsViPcPo/edit#heading=h.r112r46toau0
> PR with automated repair:
> https://github.com/jaydeepkumar1984/cassandra/commit/ef6456d652c0d07cf29d88dfea03b73704814c2c
>
> My intuition is that we're all basically in agreement that this is
> something the DB needs, we're all willing to bikeshed for our personal
> preference on where it lives and how it's implemented, and at the end of
> the day, code talks. I don't think anyone's said they'll die on the hill of
> implementation details, so that feels like CEP time to me.
>
> If you were willing and able to get a CEP together for automated repair
> based on the above material, given you've done the work and have the proof
> points it's working at scale, I think this would be a *huge contribution*
> to the community.
>
> On Thu, Aug 24, 2023, at 7:26 PM, Jaydeep Chovatia wrote:
>
> Is anyone going to file an official CEP for this?
> As mentioned in this email thread, here is one solution's design
> doc
> 
> and source code on a private Apache Cassandra patch. Could you go through
> it and let me know what you think?
>
> Jaydeep
>
> On Wed, Aug 2, 2023 at 3:54 PM Jon Haddad 
> wrote:
>
> > That said I would happily support an effort to bring repair scheduling
> to the sidecar immediately. This has nothing blocking it, and would
> potentially enable the sidecar to provide an official repair scheduling
> solution that is compatible with current or even previous versions of the
> database.
>
> This is something I hadn't thought much about, and is a pretty good
> argument for using the sidecar initially. There are a lot of deployments out
> there and having an official repair option would be a big win.
>
>
> On 2023/07/26 23:20:07 "C. Scott Andreas" wrote:
> > I agree that it would be ideal for Cassandra to have a repair scheduler
> in-DB.
> >
> > That said I would happily support an effort to bring repair scheduling
> to the sidecar immediately. This has nothing blocking it, and would
> potentially enable the sidecar to provide an official repair scheduling
> solution that is compatible with current or even previous versions of the
> database.
> >
> > Once TCM has landed, we’ll have much stronger primitives for repair
> orchestration in the database itself. But I don’t think that should block
> progress on a repair scheduling solution in the sidecar, and there is
> nothing that would prevent someone from continuing to use a sidecar-based
> solution in perpetuity if they preferred.
> >
> > - Scott
> >
> > > On Jul 26, 2023, at 3:25 PM, Jon Haddad 
> wrote:
> > >
> > > I'm 100% in favor of repair being part of the core DB, not the
> sidecar.  The current (and past) state of things where running the DB
> correctly *requires* running a separate process (either community
> maintained or 

Re: [Discuss] Repair inside C*

2024-02-22 Thread Josh McKenzie
Very late response from me here (basically necro'ing this thread).

I think it'd be useful to get this condensed into a CEP that we can then 
discuss in that format. It's clearly something we all agree we need and having 
an implementation that works, even if it's not in your preferred execution 
domain, is vastly better than nothing IMO.

I don't have cycles (nor background ;) ) to do that, but it sounds like you do,
Jaydeep, given the implementation you have on a private fork + design.

A non-exhaustive list of things that might be useful to incorporate into or
referencing from a CEP:
Slack thread: https://the-asf.slack.com/archives/CK23JSY2K/p1690225062383619
Joey's old C* ticket: https://issues.apache.org/jira/browse/CASSANDRA-14346
Even older automatic repair scheduling: 
https://issues.apache.org/jira/browse/CASSANDRA-10070
Your design gdoc: 
https://docs.google.com/document/d/1CJWxjEi-mBABPMZ3VWJ9w5KavWfJETAGxfUpsViPcPo/edit#heading=h.r112r46toau0
PR with automated repair: 
https://github.com/jaydeepkumar1984/cassandra/commit/ef6456d652c0d07cf29d88dfea03b73704814c2c

My intuition is that we're all basically in agreement that this is something 
the DB needs, we're all willing to bikeshed for our personal preference on 
where it lives and how it's implemented, and at the end of the day, code talks. 
I don't think anyone's said they'll die on the hill of implementation details, 
so that feels like CEP time to me.

If you were willing and able to get a CEP together for automated repair based 
on the above material, given you've done the work and have the proof points 
it's working at scale, I think this would be a *huge contribution* to the 
community.

On Thu, Aug 24, 2023, at 7:26 PM, Jaydeep Chovatia wrote:
> Is anyone going to file an official CEP for this?
> As mentioned in this email thread, here is one solution's design doc
> 
>  and source code on a private Apache Cassandra patch. Could you go through it 
> and let me know what you think?
> 
> Jaydeep
> 
> On Wed, Aug 2, 2023 at 3:54 PM Jon Haddad  wrote:
>> > That said I would happily support an effort to bring repair scheduling to 
>> > the sidecar immediately. This has nothing blocking it, and would 
>> > potentially enable the sidecar to provide an official repair scheduling 
>> > solution that is compatible with current or even previous versions of the 
>> > database.
>> 
>> This is something I hadn't thought much about, and is a pretty good argument 
>> for using the sidecar initially. There are a lot of deployments out there and
>> having an official repair option would be a big win. 
>> 
>> 
>> On 2023/07/26 23:20:07 "C. Scott Andreas" wrote:
>> > I agree that it would be ideal for Cassandra to have a repair scheduler 
>> > in-DB.
>> >
>> > That said I would happily support an effort to bring repair scheduling to 
>> > the sidecar immediately. This has nothing blocking it, and would 
>> > potentially enable the sidecar to provide an official repair scheduling 
>> > solution that is compatible with current or even previous versions of the 
>> > database.
>> >
>> > Once TCM has landed, we’ll have much stronger primitives for repair 
>> > orchestration in the database itself. But I don’t think that should block 
>> > progress on a repair scheduling solution in the sidecar, and there is 
>> > nothing that would prevent someone from continuing to use a sidecar-based 
>> > solution in perpetuity if they preferred.
>> >
>> > - Scott
>> >
>> > > On Jul 26, 2023, at 3:25 PM, Jon Haddad  
>> > > wrote:
>> > >
>> > > I'm 100% in favor of repair being part of the core DB, not the sidecar. 
>> > >  The current (and past) state of things where running the DB correctly 
>> > > *requires* running a separate process (either community maintained or 
>> > > official C* sidecar) is incredibly painful for folks.  The idea that 
>> > > your data integrity needs to be opt-in has never made sense to me from 
>> > > the perspective of either the product or the end user.
>> > >
>> > > I've worked with way too many teams that have either configured this 
>> > > incorrectly or not at all. 
>> > >
>> > > Ideally Cassandra would ship with repair built in and on by default.  
>> > > Power users can disable it if they want to continue to maintain their own
>> > > repair tooling for some reason.
>> > >
>> > > Jon
>> > >
>> > >> On 2023/07/24 20:44:14 German Eichberger via dev wrote:
>> > >> All,
>> > >> We had a brief discussion in [2] about the Uber article [1] where they 
>> > >> talk about having integrated repair into Cassandra and how great that 
>> > >> is. I expressed my disappointment that they didn't work with the 
>> > >> community on that (Uber, if you are listening, time to make amends)
>> > >> and it turns out Joey already had the idea and wrote the code [3] - so 
>> > >> I wanted to start a discussion to gauge interest and maybe how to 
>> 

Re: [EXTERNAL] Re: [Discuss] Generic Purpose Rate Limiter in Cassandra

2024-02-22 Thread Josh McKenzie
> Do folks think we should file an official CEP and take it there?
+1 here.

Synthesizing your gdoc, Caleb's work, and the feedback from this thread into a 
draft seems like a solid next step.
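
For grounding, below is a minimal token-bucket sketch of the kind of
primitive such a limiter could build on. It is illustrative only; the
class and where it would hook in are assumptions, not the proposed design:

    // Illustrative token bucket: permits refill at a fixed rate up to a burst cap.
    public final class TokenBucket {
        private final long capacity;        // maximum burst, in permits
        private final double refillPerNano; // permits added per nanosecond
        private double tokens;
        private long lastRefillNanos;

        public TokenBucket(long permitsPerSecond, long capacity) {
            this.capacity = capacity;
            this.refillPerNano = permitsPerSecond / 1_000_000_000.0;
            this.tokens = capacity;
            this.lastRefillNanos = System.nanoTime();
        }

        // Take one permit if available; false means the caller should shed or throttle.
        public synchronized boolean tryAcquire() {
            long now = System.nanoTime();
            tokens = Math.min(capacity, tokens + (now - lastRefillNanos) * refillPerNano);
            lastRefillNanos = now;
            if (tokens >= 1.0) {
                tokens -= 1.0;
                return true;
            }
            return false;
        }
    }

A coordinator could keep one such bucket per keyspace, or per
client-tagged "domain" as Caleb suggests below, and reject with a
retryable error whenever tryAcquire() returns false.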

On Wed, Feb 7, 2024, at 12:31 PM, Jaydeep Chovatia wrote:
> I see a lot of great ideas that have been discussed or proposed in the past
> to cover the most common candidate use cases for a rate limiter. Do folks
> think we should file an official CEP and take it there?
> 
> Jaydeep
> 
> On Fri, Feb 2, 2024 at 8:30 AM Caleb Rackliffe  
> wrote:
>> I just remembered the other day that I had done a quick writeup on the state 
>> of compaction stress-related throttling in the project:
>> 
>> https://docs.google.com/document/d/1dfTEcKVidRKC1EWu3SO1kE1iVLMdaJ9uY1WMpS3P_hs/edit?usp=sharing
>> 
>> I'm sure most of it is old news to the people on this thread, but I figured 
>> I'd post it just in case :)
>> 
>> On Tue, Jan 30, 2024 at 11:58 AM Josh McKenzie  wrote:
>>>
 2.) We should make sure the links between the "known" root causes of 
 cascading failures and the mechanisms we introduce to avoid them remain 
 very strong.
>>> Seems to me that our historical strategy was to address individual known 
>>> cases one-by-one rather than looking for a more holistic load-balancing and 
>>> load-shedding solution. While the engineer in me likes the elegance of a 
>>> broad, more-inclusive *actual SEDA-like* approach, the pragmatist in me 
>>> wonders how far we think we are today from a stable set-point.
>>> 
>>> i.e. are we facing a handful of cases where nodes can still get pushed over 
>>> and then cascade that we can surgically address, or are we facing a broader 
>>> lack of back-pressure that rears its head in different domains (client -> 
>>> coordinator, coordinator -> replica, internode with other operations, etc) 
>>> at surprising times and should be considered more holistically?
>>> 
>>> On Tue, Jan 30, 2024, at 12:31 AM, Caleb Rackliffe wrote:
 I almost forgot CASSANDRA-15817, which introduced
 reject_repair_compaction_threshold, a mechanism to stop
 repairs while compaction is underwater.
 
> On Jan 26, 2024, at 6:22 PM, Caleb Rackliffe  
> wrote:
> 
> Hey all,
> 
> I'm a bit late to the discussion. I see that we've already discussed 
> CASSANDRA-15013  
> and CASSANDRA-16663 
>  at least in 
> passing. Having written the latter, I'd be the first to admit it's a 
> crude tool, although it's been useful here and there, and provides a 
> couple primitives that may be useful for future work. As Scott mentions, 
> while it is configurable at runtime, it is not adaptive, although we did 
> make configuration easier in CASSANDRA-17423 
> . It also is 
> global to the node, although we've lightly discussed some ideas around 
> making it more granular. (For example, keyspace-based limiting, or 
> limiting "domains" tagged by the client in requests, could be 
> interesting.) It also does not deal with inter-node traffic, of course.
> 
> Something we've not yet mentioned (that does address internode traffic) 
> is CASSANDRA-17324 
> , which I proposed 
> shortly after working on the native request limiter (and have just not 
> had much time to return to). The basic idea is this:
> 
>> When a node is struggling under the weight of a compaction backlog and 
>> becomes a cause of increased read latency for clients, we have two 
>> safety valves:
>> 
>> 
>> 
>> 1.) Disabling the native protocol server, which stops the node from 
>> coordinating reads and writes.
>> 2.) Jacking up the severity on the node, which tells the dynamic snitch 
>> to avoid the node for reads from other coordinators.
>> 
>> 
>> These are useful, but we don’t appear to have any mechanism that would 
>> allow us to temporarily reject internode hint, batch, and mutation 
>> messages that could further delay resolution of the compaction backlog.
>> 
> 
> Whether it's done as part of a larger framework or on its own, it still 
> feels like a good idea.
> 
> Thinking in terms of opportunity costs here (i.e. where we spend our 
> finite engineering time to holistically improve the experience of 
> operating this database) is healthy, but we probably haven't reached the 
> point of diminishing returns on nodes being able to protect themselves 
> from clients and from other nodes. I would just keep in mind two things:
> 
> 1.) The effectiveness of rate-limiting in the system (which includes the 
> database and all clients) as a whole necessarily decreases as we move 
> from the application to the 

Re: Welcome Brad Schoening as Cassandra Committer

2024-02-22 Thread Melissa Logan
Awesome. Congrats, Brad!

On Thu, Feb 22, 2024 at 1:49 AM Maxim Muzafarov  wrote:

> Congratulations!
>
> On Thu, 22 Feb 2024 at 10:23, Berenguer Blasi 
> wrote:
> >
> > Congrats!
> >
> > On 22/2/24 9:57, Jacek Lewandowski wrote:
> >
> > Congrats Brad!
> >
> >
> > - - -- --- -  -
> > Jacek Lewandowski
> >
> >
> > czw., 22 lut 2024 o 01:29 Štefan Miklošovič 
> napisał(a):
> >>
> >> Congrats Brad, great work in the Python department :)
> >>
> >> On Wed, Feb 21, 2024 at 9:46 PM Josh McKenzie 
> wrote:
> >>>
> >>> The Apache Cassandra PMC is pleased to announce that Brad Schoening
> has accepted
> >>> the invitation to become a committer.
> >>>
> >>> Your work on the integrated python driver, launch script environment,
> and tests
> >>> has been a big help to many. Congratulations and welcome!
> >>>
> >>> The Apache Cassandra PMC members
>


Re: Table name length limit in Cassandra

2024-02-22 Thread Bowen Song via dev

Hi Gaurav,

I would be less worried about performance issues than interoperability 
issues. Other tools/client libraries do not expect longer names, which
may cause them to behave unexpectedly (e.g. truncating/crashing/...).


If you can, try to get rid of common prefixes/suffixes, and use abbreviations
where possible. You shouldn't have thousands of tables (and yes, there are
performance issues with that), so the table name length limit really
shouldn't be an issue.
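
For context on where the 48 likely comes from: on disk, each table gets a
directory named <table>-<32-hex table id> inside a keyspace directory, so
two 48-character names plus the 33-character id suffix and the SSTable
component file names already approach 255 characters. If you do patch the
limit, a creation-time guard along these lines (purely illustrative, not
Cassandra code) keeps names in check:

    // Purely illustrative guard, not Cassandra code: reject a candidate table
    // name before attempting any schema change against a patched cluster.
    static String validateTableName(String name, int maxLen) {
        if (!name.matches("\\w+")) {
            throw new IllegalArgumentException("invalid characters in table name: " + name);
        }
        if (name.length() > maxLen) {
            throw new IllegalArgumentException("table name '" + name + "' is "
                    + name.length() + " chars; limit is " + maxLen);
        }
        return name;
    }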


Best,
Bowen

On 22/02/2024 05:47, Gaurav Agarwal wrote:

Hi team,

Currently Cassandra has a table name length limit of 48 characters. If
I understand correctly, the limit was chosen because file names cannot
be longer than 255 characters on Windows, whereas Linux supports file
paths of up to 4096 bytes.


Given my Cassandra nodes are on Linux systems, can I increase the 
limit from 48 characters to 64 characters? Will there be any 
performance issues due to increasing the limit?


Thanks
Gaurav


Re: Welcome Brad Schoening as Cassandra Committer

2024-02-22 Thread Maxim Muzafarov
Congratulations!

On Thu, 22 Feb 2024 at 10:23, Berenguer Blasi  wrote:
>
> Congrats!
>
> On 22/2/24 9:57, Jacek Lewandowski wrote:
>
> Congrats Brad!
>
>
> - - -- --- -  -
> Jacek Lewandowski
>
>
> czw., 22 lut 2024 o 01:29 Štefan Miklošovič  
> napisał(a):
>>
>> Congrats Brad, great work in the Python department :)
>>
>> On Wed, Feb 21, 2024 at 9:46 PM Josh McKenzie  wrote:
>>>
>>> The Apache Cassandra PMC is pleased to announce that Brad Schoening has 
>>> accepted
>>> the invitation to become a committer.
>>>
>>> Your work on the integrated python driver, launch script environment, and 
>>> tests
>>> has been a big help to many. Congratulations and welcome!
>>>
>>> The Apache Cassandra PMC members


Re: Welcome Brad Schoening as Cassandra Committer

2024-02-22 Thread Berenguer Blasi

Congrats!

On 22/2/24 9:57, Jacek Lewandowski wrote:

Congrats Brad!


- - -- --- -  -
Jacek Lewandowski


czw., 22 lut 2024 o 01:29 Štefan Miklošovič 
 napisał(a):


Congrats Brad, great work in the Python department :)

On Wed, Feb 21, 2024 at 9:46 PM Josh McKenzie
 wrote:

The Apache Cassandra PMC is pleased to announce that Brad
Schoening has accepted
the invitation to become a committer.

Your work on the integrated python driver, launch script
environment, and tests
has been a big help to many. Congratulations and welcome!

The Apache Cassandra PMC members


Re: Welcome Brad Schoening as Cassandra Committer

2024-02-22 Thread Jacek Lewandowski
Congrats Brad!


- - -- --- -  -
Jacek Lewandowski


czw., 22 lut 2024 o 01:29 Štefan Miklošovič 
napisał(a):

> Congrats Brad, great work in the Python department :)
>
> On Wed, Feb 21, 2024 at 9:46 PM Josh McKenzie 
> wrote:
>
>> The Apache Cassandra PMC is pleased to announce that Brad Schoening has
>> accepted
>> the invitation to become a committer.
>>
>> Your work on the integrated python driver, launch script environment, and
>> tests
>> has been a big help to many. Congratulations and welcome!
>>
>> The Apache Cassandra PMC members
>>
>