> Tests fail for a variety of reasons. Some of them fail due to
> underlying infrastructural issues. For example, getting a clean run of
> Python DTests typically involves rerunning them a couple of times. Is it
> possible to do that at the test framework level, i.e. in Jenkins and/or
>
Tests fail for a variety of reasons. Some of them fail due to underlying
infrastructural issues. For example, getting a clean run of Python DTests
typically involves rerunning them a couple of times. Is it possible to do that
at the test framework level, i.e. in Jenkins and/or CircleCI?
Dinesh
>
That’s awesome that we have that set up. I was checking out b.a.o after my
email and noticed some recent runs. I don’t mean to prescribe any specific
way of surfacing results as long as they are easily accessible to all
contributors (well documented where to find them, etc).
Progress on posting
> In my opinion/experience, this is all a direct consequence of lack of trust
> in CI caused by flakiness.
The state of this project's tests certainly feels like an
insurmountable challenge at times…
Having been battling away with Jenkins, because I do have ASF access and don't
Looks like there are two Slack plugins for Jenkins. They trigger after
builds and, if my rusty Jenkins-fu is right, the trunk build can be scheduled
to run daily and then have the plugin post to Slack when it's done. Not an
expert and can't poke at the Jenkins instance myself, so not sure what
Can someone find a circleci or jenkins bot that posts to the #cassandra-dev
channel in ASF slack once a day?
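A daily build-status post doesn't strictly need a plugin; Slack's incoming webhooks accept a simple JSON payload. A minimal sketch, assuming a webhook has been provisioned for the channel (the job name, build URL, and webhook URL below are placeholders):

```python
import json
import urllib.request

def build_slack_payload(job_name, status, build_url):
    """Format a one-line daily build summary for a Slack incoming webhook."""
    return {"text": f"{job_name}: {status} ({build_url})"}

def post_to_slack(webhook_url, payload):
    """POST the payload as JSON; Slack incoming webhooks reply with 'ok'."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    # Placeholder job and URL, for illustration only.
    payload = build_slack_payload(
        "cassandra-trunk", "SUCCESS",
        "https://builds.example.org/job/cassandra-trunk/")
    # post_to_slack("https://hooks.slack.com/services/XXX/YYY/ZZZ", payload)
    print(payload["text"])
```

Cron (or a scheduled Jenkins job) invoking something like this once a day would cover the ask.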
On Fri, Jan 24, 2020 at 11:23 AM Jordan West wrote:
> Keeping trunk green at all times is a great goal to strive for, and I'd love
> to continue to work towards it, but in my experience it's
Keeping trunk green at all times is a great goal to strive for, and I'd love to
continue to work towards it, but in my experience it's not easy. Flaky
tests, for the reasons folks mentioned, are a real challenge. A standard we
could use while we work towards the more ambitious one, and we are pretty
>
> I also don't think it leads to the right behaviour or incentives.
The gap between when a test is authored and the point at which it's
determined to be flaky, as well as the difficulty of assigning responsibility
(an "unrelated" change can in some cases make a previously stable test
become flaky)
> due to oversight on a commit or a delta breaking some test the author thinks
> is unrelated to their diff but turns out to be a second-order consequence of
> their change that they didn't expect
In my opinion/experience, this is all a direct consequence of lack of trust in
CI caused by
>
> gating PRs on clean runs won’t achieve anything other than dealing with
> folks who straight up ignore the spirit of the policy and knowingly commit
> code with test breakage
I think there's some nuance here. We have a lot of suites (novnode, cdc,
etc.) where failures show up because
As for GH for code review, I find that it works very well for nits. It's also
great for doc changes, given how GH allows you to suggest changes to files
in-place and automatically create PRs for those changes. That lowers the
barrier for those tiny contributions.
For anything relatively
> I find it only useful for nits, or for coaching-level comments that I would
> never want propagated to Jira.
Actually, I'll go one step further. GitHub encourages comments that are too
trivial, poisoning the well for third parties trying to find useful
information. If the comment wouldn't
The common factor is flaky tests, not people. You get a clean run, you commit.
Turns out, a test was flaky. This reduces trust in CI, so people commit
without looking as closely at results. Gating on clean tests doesn't help, as
you just rerun until you're clean. Rinse and repeat. Breakages
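The rerun-until-clean dynamic is easy to quantify with back-of-the-envelope arithmetic (the pass rate and suite size below are made-up numbers, assuming independent failures):

```python
def clean_run_probability(pass_rate, num_tests):
    """Probability that every test passes in a single run,
    assuming each test fails independently with the same rate."""
    return pass_rate ** num_tests

# Even if each test passes 99.9% of the time, a 1000-test suite
# produces a fully green run only about 37% of the time
# (0.999 ** 1000 ~= 0.368), so rerunning until green is routine.
p = clean_run_probability(0.999, 1000)
```

Which is exactly why a single clean run carries so little signal once any flakiness is tolerated.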
100% agree
François and team wrote a doc on testing and gating commits
Blake wrote a doc on testing and gating commits
Every release there’s a thread on testing and gating commits
People are the common factor every time. Nobody wants to avoid merging their
patch because someone broke a test
>
> I am reacting to what I currently see
> happening in the project; tests fail as the norm and this is kinda seen as
> expected, even though it goes against the policies as I understand it.
After over half a decade seeing us all continue to struggle with this
problem, I've come around to the
On 1/23/20 3:53 PM, David Capwell wrote:
2) Nightly build email to dev@?
Nope. builds@c.a.o is where these go.
https://lists.apache.org/list.html?bui...@cassandra.apache.org
Michael
Thanks for the link. I have reached out to infra and will update this
thread as I hear back.
Looking around other Apache projects, there are workarounds, so I don't
actually see this as a blocker, just a limit on possible implementations.
So assuming we have a solution which enables CI builds on
On 1/23/20 2:13 PM, Mick Semb Wever wrote:
ASF policy is that patches from contributors who haven't filed an ICLA
cannot have their patches automatically run through any ASF CI
system. It's up to a committer (or someone who has filed an ICLA) to
trigger the test run on the patch.
I couldn't
> > ASF policy is that patches from contributors who haven't filed an ICLA
> > cannot have their patches automatically run through any ASF CI
> > system. It's up to a committer (or someone who has filed an ICLA) to
> > trigger the test run on the patch.
>
> I couldn't find this CI+ICLA policy
On Thu, Jan 23, 2020 at 9:09 AM Jeff Jirsa wrote:
> On Thu, Jan 23, 2020 at 6:18 AM Jeremiah Jordan
> wrote:
>
> > It is the reviewer's and author's job to make sure CI ran and didn’t
> > introduce new failing tests; it doesn’t matter how they were run. It is
> > just as easy to let something
> I am fine with Jenkins or CircleCI; though I feel CircleCI is more effort
> for each member.
ASF policy is that patches from contributors who haven't filed an ICLA cannot
have their patches automatically run through any ASF CI system. It's up to a
committer (or someone who has filed a
>
> CircleCI can build github forked branches.
Yes it can, but we currently require each member of the community to set up
their own CircleCI in order to test Cassandra (and non-paid accounts will
have many tests failing). I looked into CircleCI JIRA integration and it
seems that we would need
On Thu, Jan 23, 2020 at 6:18 AM Jeremiah Jordan
wrote:
> Can’t you currently open a PR with the right commit message, do the
> review there with all comments posted back to JIRA, run CI on it and then
> merge it, closing the PR? This is the basic workflow you are proposing, yes?
>
>
Yes you can.
Can’t you currently open a PR with the right commit message, do the review
there with all comments posted back to JIRA, run CI on it and then merge it,
closing the PR? This is the basic workflow you are proposing, yes?
It is the reviewer's and author's job to make sure CI ran and didn’t introduce
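The "right commit message" part of that workflow can be checked mechanically; a hypothetical commit-msg hook sketch (the `CASSANDRA-NNNN` key format is the project's real JIRA convention, and GitHub really does close PRs on "Closes #N" in a merged commit, but the hook itself and the example numbers are illustrative):

```python
import re

# Require a JIRA key so the list archive links back to the ticket,
# and a "Closes #N" reference so the PR auto-closes on merge.
JIRA_KEY = re.compile(r"\bCASSANDRA-\d+\b")
PR_CLOSE = re.compile(r"\bCloses #\d+\b", re.IGNORECASE)

def check_commit_message(message):
    """Return a list of problems; an empty list means the message passes."""
    problems = []
    if not JIRA_KEY.search(message):
        problems.append("missing JIRA key (e.g. CASSANDRA-12345)")
    if not PR_CLOSE.search(message):
        problems.append("missing PR-closing reference (e.g. Closes #100)")
    return problems
```

Wired into a server-side hook or a CI check, this would catch the mechanical half of the convention without changing how reviews happen.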
> My mind set is that by switching to PRs (even if all the conversations are
> in JIRA) we can setup automation which helps detect issues before merging.
CircleCI can build github forked branches. AFAIK there's no reason to open a PR.
The finer granularity of code review comments can also be
Sorry Jeremiah, I don't understand your comment, would it be possible to
elaborate more?
About the point on not forbidding as long as the review and testing needs
are met, could you define what that means to you?
There are a few questions I ask myself
"Does the current process stop code which
Doesn’t this github review workflow as described work right now? It’s just not
the “only” way people do things?
I don’t think we need to forbid other methods of contribution as long as the
review and testing needs are met.
-Jeremiah
> On Jan 22, 2020, at 6:35 PM, Yifan Cai wrote:
>
> +1
+1 nb to the PR approach for reviewing.
And thanks David for initiating the discussion. I would like to put my 2
cents in it.
IMO, review comments are better associated with the changes, precisely down
to the line level, if they are put in the PR rather than in JIRA comments.
Discussions
Thanks for the links Benedict!
Been reading the links and see the following points being made:
*) enabling the Spark-style process would lower the barrier to enter the project
*) high level discussions should be in JIRA [1]
*) not desirable to annotate both JIRA and GitHub; should only annotate JIRA
I personally use Github PRs to discuss the changes if there is feedback on the
code. The discussion does get linked with the JIRA ticket. However, committing
is manual.
Dinesh
> On Jan 22, 2020, at 2:20 PM, David Capwell wrote:
>
> When submitting or reviewing a change in JIRA I notice that
This is brought up roughly once per year. If anything, you're a bit behind
schedule
https://lists.apache.org/thread.html/0750a01682eb36374e490385d6776669ac86ebc02efa27a87b2dbf9f%40%3Cdev.cassandra.apache.org%3E
When submitting or reviewing a change in JIRA I notice that we have three
main patterns for doing this: link branch, link diff, and link GitHub pull
request (PR); I wanted to bring up the idea of switching over to GitHub
pull requests as the norm.
Why should we do this? The main reasons I can