> I find it only useful for nits, or for coaching-level comments that I would 
> never want propagated to Jira.

Actually, I'll go one step further. GitHub encourages comments that are too 
trivial, poisoning the well for third parties trying to find useful 
information.  If the comment wouldn't be made in Jira, it probably shouldn't be 
made. 

 

On 24/01/2020, 08:56, "Benedict Elliott Smith" <bened...@apache.org> wrote:

    The common factor is flaky tests, not people.  You get a clean run, you 
commit.  Turns out, a test was flaky.  This reduces trust in CI, so people 
commit without looking as closely at results.  Gating on clean tests doesn't 
help, as you simply re-run until you get a clean run.  Rinse and repeat.  
Breakages accumulate.
    
    This is what happens leading up to every release - nobody commits knowing 
there's a breakage.  We have a problem with bad tests, not bad people or 
process.
    
    FWIW, I no longer like the GitHub workflow.  I find it only useful for 
nits, or for coaching-level comments that I would never want propagated to 
Jira.  I find that a strong patch submission of any size is better managed 
with human-curated Jira comments, which also provide a better permanent 
record.  When skimming a discussion, Jira is more informative than GitHub.  
Even with the GitHub UX, the inline context hinders more than it helps.
    
    As to propagating comments to Jira: has anyone here ever read them?  I 
haven't, as they're impenetrable: ugly and almost entirely noise.  If anything, 
I would prefer that we as a project discourage GitHub for review, not move 
towards it.
    
    This is without getting into the problem of multiple branch PRs.  Until 
this is _provably_ painless, we cannot introduce a workflow that requires it 
and blocks commit on it.  Working with multiple branches is difficult enough 
already, surely?
    
    
    
    On 24/01/2020, 03:16, "Jeff Jirsa" <jji...@gmail.com> wrote:
    
        100% agree
        
        François and team wrote a doc on testing and gating commits
        Blake wrote a doc on testing and gating commits
        Every release there’s a thread on testing and gating commits
        
        People are the common factor every time. Nobody wants to avoid merging 
their patch because someone broke a test elsewhere. 
        
        We can't technically block merging with the repo as it exists now, so 
it's always going to come down to people and peer pressure, until or unless 
someone starts reverting commits that break tests.
        
        (Of course, someone could write a tool that automatically reverts new 
commits as long as tests fail....)
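        
        A very rough sketch of what such a tool might look like (purely 
        illustrative - the "ant test" entry point and the revert limit are 
        placeholder assumptions, not anything agreed on or built):
        
        #!/usr/bin/env python3
        # Illustrative sketch only: revert the most recent commits, newest
        # first, until the test suite passes again.
        import subprocess
        
        def tests_pass() -> bool:
            # Placeholder test entry point; swap in the real gating CI command.
            return subprocess.run(["ant", "test"]).returncode == 0
        
        def revert_until_green(max_reverts: int = 5) -> None:
            # Capture the newest commits up front so the reverts we create
            # don't themselves become revert candidates.
            candidates = subprocess.run(
                ["git", "rev-list", f"--max-count={max_reverts}", "HEAD"],
                capture_output=True, text=True, check=True,
            ).stdout.split()
            for sha in candidates:
                if tests_pass():
                    return
                # "git revert" keeps history honest: the broken commit and
                # its removal both remain visible.
                subprocess.run(["git", "revert", "--no-edit", sha], check=True)
        
        if __name__ == "__main__":
            revert_until_green()
        
        (The hard part, of course, is deciding which test signal to trust - 
        which loops straight back to the flaky test problem.)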
        
        On Jan 23, 2020, at 5:54 PM, Joshua McKenzie <jmcken...@apache.org> wrote:
        > 
        > 
        >> 
        >> 
        >> I am reacting to what I currently see happening in the project; 
        >> tests fail as the norm and this is kinda seen as expected, even 
        >> though it goes against the policies as I understand it.
        > 
        > After over half a decade of seeing us all continue to struggle with 
        > this problem, I've come around to the school of "apply pain" (I mean 
        > that as light-heartedly as you can take it) when there's a failure 
        > to incentivize fixing; specifically, in this case the only idea I 
        > can think of is preventing merge of any PR with failing tests. We go 
        > through this cycle as we approach each major release: we have the 
        > gatekeeper of "we're obviously not going to cut a release with 
        > failing tests", and we clean them up. After the release, the 
        > pressure is off, we exhale, relax, and flaky test failures (and 
        > others) start to creep back in.
        > 
        > If the status quo is the world we want to live in, that's totally 
        > fine and no judgement intended - we can build tooling around test 
        > failure history and known flaky tests etc to optimize engineer 
        > workflows around that expectation. But what I keep seeing on threads 
        > like this (and have always heard brought up in conversation) is that 
        > our collective *moral* stance is that we should have green test 
        > boards and not merge code if it introduces failing tests.
        > 
        > Not looking to prescribe or recommend anything, just hoping that
        > observation above might be of interest or value to the conversation.
        > 
        >> On Thu, Jan 23, 2020 at 4:17 PM Michael Shuler 
<mich...@pbandjelly.org>
        >> wrote:
        >> 
        >>> On 1/23/20 3:53 PM, David Capwell wrote:
        >>> 
        >>> 2) Nightly build email to dev@?
        >> 
        >> Nope. builds@c.a.o is where these go.
        >> https://lists.apache.org/list.html?bui...@cassandra.apache.org
        >> 
        >> Michael
        >> 
    
    
    



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
For additional commands, e-mail: dev-h...@cassandra.apache.org
