+1!

I’ve had trouble using Claude effectively on C*’s large codebase without a
lot of repeated “repo discovery” prompting. This Anthropic issue [1]
resonates with that experience and I think the approach would benefit C*
contributors as well:

   - A top-level CLAUDE.md documenting sub-directories, architecture, and
   commands
   - A CLAUDE.md in each large sub-directory documenting local context (see
   the sketch below)
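
For concreteness, here is a rough sketch of that hierarchy. The directory
names come from the current repo layout; everything else is a strawman, not
a proposal for exact contents:

   CLAUDE.md                  <- architecture overview, build/test commands
   src/java/CLAUDE.md         <- server internals: key packages, invariants
   test/distributed/CLAUDE.md <- how in-JVM dtests are structured and run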

[1] https://github.com/anthropics/claude-code/issues/403

Yuntong

On Tue, Feb 17, 2026 at 1:22 PM Jon Haddad <[email protected]> wrote:

> Yes, this works very well, and I recommend it.  Here's my CLAUDE.md from
> easy-db-lab as an example [1].
>
> Here's what I've found useful:
>
> - You can tell it to be aware of certain conventions, relationships
> between types, tools, shortcuts.
> - You can tell it to run static analysis to find issues with the code it's
> writing.  Very useful in preventing cyclomatic complexity from spiraling
> out of control.  I use detekt with Kotlin, but you can just as easily use
> PMD / Spotless.  (See the sketch after this list.)
> - You can tell it the types of testing you want: integration, in-JVM
> tests, unit tests, mocking.
> - Libraries it should use to solve specific problems.  For example, when
> writing tests for easy-db-lab, I lean on LocalStack since it provides test
> constructs for a lot of AWS.  Saves me a ton of time.
> - Use the hierarchy to your advantage.  You can have a file at every level
> getting more specific as you drill down.
> - Claude can do the analysis and write the files to give you a good
> starting point too.
> - Telling it to use SOLID principles helps a TON, but you need to define
> that a bit more clearly or you end up with things like DestroyVPCService
> instead of a class that does all your CRUD on VPCs.
> - They are *great* at refactoring.  Would be a great opportunity to pay
> off some technical debt.  I've used Claude to do the refactoring projects I
> was absolutely dreading.
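>
> As a rough sketch of what those instructions look like in practice (the
> section names and the detekt invocation are assumptions about a typical
> Kotlin/Gradle setup, not copied from my actual file):
>
>    ## Conventions
>    - Follow SOLID, but keep services cohesive: one VPCService owning all
>      VPC CRUD, not a DestroyVPCService per operation.
>    ## Static analysis
>    - After changing Kotlin code, run `./gradlew detekt` and fix every
>      reported issue before finishing.
>    ## Testing
>    - AWS-facing code: integration tests against LocalStack.
>    - Everything else: plain unit tests; mock only at module boundaries.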
>
> > To me this is actually the big win, and the current shape of LLMs as
> > agentic coders is just a kind of forcing function for something we ought
> > to have been doing for new human contributors for ages.
>
> Yes, agreed.  Test coverage, documentation and well-defined architectural
> boundaries are good for people and the machines :)
>
> Imo, the bar for code quality can be higher when leaning on these tools,
> because you can spend more brain power on solving real problems.
>
> You can put skills in the repo itself for common tasks that are a bit
> complicated.  I just published a marketplace [2] that has skills for C*
> tuning, diagnosing issues and data modeling.  To apply this to the C* repo,
> we could have skills to help create in-jvm dtests, and we could be fairly
> detailed about the right way to build them.  Skills are loaded on demand,
> so they don't eat up all your context.  Useful for when you have a lot of
> complicated things you want to do, but don't need them all the time.
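>
> For reference, a skill is just a directory holding a SKILL.md whose
> frontmatter tells Claude when to load it.  A hypothetical in-jvm dtest
> skill for the C* repo might start like this (the name, description and
> body below are purely illustrative):
>
>    .claude/skills/in-jvm-dtest/SKILL.md
>    ---
>    name: in-jvm-dtest
>    description: How to write and run in-JVM dtests in apache/cassandra
>    ---
>    Put new tests under test/distributed/, follow the existing cluster
>    setup patterns there, and state the node topology each test needs.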
>
> I've been committing around 10-20K LOC a week with Claude.  Takes a minute
> to put the right guardrails in place.  Once they're there, it's incredibly
> impressive.
>
> Jon
>
> [1] https://github.com/rustyrazorblade/easy-db-lab/blob/main/CLAUDE.md
> [2] https://github.com/rustyrazorblade/skills
>
> On Tue, Feb 17, 2026 at 9:09 AM Josh McKenzie <[email protected]>
> wrote:
>
>> will definitely help contributors adhere to standards
>>
>> To me this is actually the big win, and the current shape of LLMs as
>> agentic coders is just a kind of forcing function for something we ought to
>> have been doing for new human contributors for ages.
>>
>> LLMs need the same kind of "fresh onboarding" context that a new
>> contributor would need to be effective in a space (ignoring MCP servers
>> with ASTs, code property graphs, etc. for now).
>>
>> So I'm a +1 on it from the human angle alone.
>>
>> On Mon, Feb 16, 2026, at 11:03 PM, Bernardo Botella wrote:
>>
>> Thanks for bringing this up, Stefan!!
>>
>> A really interesting topic indeed.
>>
>>
>> I’ve also heard ideas around having CLAUDE.md-type files that help LLMs
>> understand the code base without having to do a full scan every time.
>>
>> So, all in all, putting together something that we as a community agree
>> describes good practices plus repository information, not only for the
>> main Cassandra repository but also for its subprojects, will definitely
>> help contributors adhere to standards, and help us reviewers ensure that
>> at least some steps have been considered.
>>
>> Things like (see the strawman skeleton below):
>> - Repository structure: what every folder is for
>> - Test suites, how they work, and how to run them
>> - Git commit standards
>> - Project-specific lint rules (like braces on new lines!)
>> - Preferred wording style for patches/documentation
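>>
>> For the main repo, that skeleton could be as small as this. The directory
>> names and the ant invocation reflect the current tree as I understand it;
>> treat the rest as a strawman:
>>
>>    # CLAUDE.md
>>    ## Repository structure
>>    src/java/         - server implementation
>>    test/unit/        - unit tests
>>    test/distributed/ - in-JVM dtests
>>    ## Running tests
>>    ant test -Dtest.name=<TestClassName>
>>    ## Code style
>>    - Braces go on a new line.
>>    ## Commits
>>    - Reference the CASSANDRA-XXXXX JIRA ticket in the commit message.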
>>
>> Committed to the projects, and accessible to LLMs, that sounds like really
>> useful context for those types of contributions (which are going to keep
>> happening regardless).
>>
>> So curious to read what others think.
>> Bernardo
>>
>> PS. Totally agree that this should change nothing about the quality bar
>> for code reviews and merged code.
>>
>> > On Feb 16, 2026, at 6:27 PM, Štefan Miklošovič <[email protected]> wrote:
>> >
>> > Hey,
>> >
>> > This happened recently in the kernel space; see (1), (2).
>> >
>> > What that is doing, as I understand it, is that you can point an LLM to
>> > these resources and it then becomes more capable when reviewing
>> > patches or even writing them. It is a kind of guide / context provided
>> > to the AI prompt.
>> >
>> > I can imagine we would just compile something similar and merge it into
>> > the repo; then anybody prompting with it would have an easier job,
>> > produce less error-prone patches, adhere to the code style, and so on.
>> >
>> > This might look like a controversial topic, but I think we need to
>> > discuss it. The usage of AI is getting more and more frequent. From
>> > Cassandra's perspective there is just this (3), but I do not think we
>> > reached any conclusions there (please correct me if I am wrong about
>> > where we are with AI-generated patches).
>> >
>> > This is becoming an elephant in the room; I am noticing that some
>> > patches for Cassandra were prompted with AI completely. I think it would
>> > be way better if we made it easy for everybody contributing like that.
>> >
>> > This does not mean that we, as committers, would blindly believe what AI
>> > generated. Not at all. It would still need to go through the formal
>> > review like anything else. But acting like this is not happening, and
>> > that people are just not going to use AI when trying to contribute, is
>> > not right. We should embrace it in some form ...
>> >
>> > 1) https://github.com/masoncl/review-prompts
>> > 2) https://lore.kernel.org/lkml/[email protected]/
>> > 3) https://lists.apache.org/thread/j90jn83oz9gy88g08yzv3rgyy0vdqrv7