Re: Testing and jira tickets

2017-04-12 Thread Michael Shuler
On 04/12/2017 12:10 AM, mck wrote: > > On 10 March 2017 at 05:51, Jason Brown wrote: >> A nice convention we've stumbled into wrt patches submitted via Jira is >> to post the results of unit test and dtest runs to the ticket (to show the >> patch doesn't break things).

Re: Testing and jira tickets

2017-04-11 Thread mck
On 10 March 2017 at 05:51, Jason Brown wrote: > A nice convention we've stumbled into wrt patches submitted via Jira is > to post the results of unit test and dtest runs to the ticket (to show the > patch doesn't break things). > [snip] > As an example, should

Re: Testing and jira tickets

2017-03-16 Thread Stefan Podkowinski
Yes, failed test results need to be looked at by someone. But this is already the case and won't change whether we run tests for each patch and branch, or just once a day for a single dev branch. Having to figure out exactly which commit causes the regression would take some additional
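The "figure out which commit causes the regression" step after a once-a-day batch run can be largely mechanized with `git bisect run`. A minimal self-contained sketch, using a throwaway repo and a hypothetical `status.txt` check standing in for a real test suite (not anything from the Cassandra tree):

```shell
# Throwaway demo repo: four commits, one of which breaks the stand-in test.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"

echo pass > status.txt                   # stand-in for "the test suite passes"
git add status.txt
git commit -qm "good 1"
git commit -q --allow-empty -m "good 2"
echo fail > status.txt                   # the regressing change
git commit -qam "bad commit"
git commit -q --allow-empty -m "good 3"

# Bisect between the known-good first commit and the failing tip;
# `git bisect run` replays the check at each step (exit 0 = good, nonzero = bad).
git bisect start HEAD HEAD~3
git bisect run grep -q pass status.txt >/dev/null
git show -s --format=%s refs/bisect/bad  # prints the first bad commit's subject
```

With a daily dev-branch batch of N commits, the same pattern needs only O(log N) replays of the suite rather than one CI run per commit, which is the trade-off being weighed in this thread.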

Re: Testing and jira tickets

2017-03-10 Thread Josh McKenzie
> > I think we'd be able to figure out which one of them is causing a regression > on the day after. That sounds great in theory. In practice, that doesn't happen unless one person steps up and makes themselves accountable for it. For reference, take a look at:

Re: Testing and jira tickets

2017-03-10 Thread Stefan Podkowinski
If I remember correctly, the requirement of providing test results along with each patch was because of tick-tock, where the goal was to have stable release branches at all times. Without CI for testing each individual commit on all branches, this just won't work anymore. But would that really be

Re: Testing and jira tickets

2017-03-09 Thread Jason Brown
To Ariel's point, I don't think we can expect all contributors to run all utests/dtests, especially when the patch spans multiple branches. On that front, I, like Ariel and many others, typically create my own branch of the patch and execute the tests. I think this is a reasonable system,

Re: Testing and jira tickets

2017-03-09 Thread Ariel Weisberg
Hi, Before this change I had already been queuing the jobs myself as a reviewer. It also happens that many reviewers are committers. I wouldn't ask contributors to run the dtests/utests for any purpose other than so that they know the submission is done. Even if they did and they pass it

Re: Testing and jira tickets

2017-03-09 Thread Jonathan Haddad
No problem, I'll start a new thread. On Thu, Mar 9, 2017 at 11:48 AM Jason Brown wrote: > Jon and Brandon, > > I'd actually like to narrow the discussion, and keep it focused on my > original topic. Those are two excellent topics that should be addressed, > and the

Re: Testing and jira tickets

2017-03-09 Thread Jason Brown
Jon and Brandon, I'd actually like to narrow the discussion and keep it focused on my original topic. Those are two excellent topics that should be addressed, and the solution(s) might be the same as, or similar to, the outcome of this. However, I feel they deserve their own message thread. Thanks

Re: Testing and jira tickets

2017-03-09 Thread Jonathan Haddad
If you don't mind, I'd like to broaden the discussion a little bit to also discuss performance-related patches. For instance, CASSANDRA-13271 was a performance / optimization related patch that included *zero* information on whether there was any perf improvement or a regression as a result of the

Testing and jira tickets

2017-03-09 Thread Jason Brown
Hey all, A nice convention we've stumbled into wrt patches submitted via Jira is to post the results of unit test and dtest runs to the ticket (to show the patch doesn't break things). Many contributors have used the DataStax-provided cassci system, but that's not the best long term solution.