Re: Testing and jira tickets

2017-04-11 Thread Michael Shuler
On 04/12/2017 12:10 AM, mck wrote:
> 
> On 10 March 2017 at 05:51, Jason Brown  wrote:
>> A nice convention we've stumbled into wrt to patches submitted via Jira is
>> to post the results of unit test and dtest runs to the ticket (to show the
>> patch doesn't break things). 
>> [snip]
>> As an example, should contributors/committers run dtests and unit tests on
>> *some* machine (publicly available or otherwise), and then post those
>> results to the ticket?
> 
> 
> Yes please.
>  I support the position that nothing should get committed without passing
>  both the unit tests and dtests, and that any SHA in trunk or any release
>  branch that fails them is automatically reverted.
> 
> I was under the impression that the point of tick-tock was to move the
> code towards a stable-master approach, and that the lesson learned was
> that restricting any release to a single patch version is, regardless of
> how good the developers and CI system are, a pretty poor way of trying
> to build a stable product.
> 
> So from tick-tock to 4.0, I was really hoping we would keep all the
> stable-master and CI improvements obtained throughout the tick-tock
> cycle while re-adding the discipline of ongoing patch versions to
> supported releases. (While being realistic about the resources available.)
> 
> 
> Unfortunately, without access to DataStax's cassci, the best that I
> could do is this:
> https://issues.apache.org/jira/browse/CASSANDRA-13307?focusedCommentId=15962001&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15962001
> 
> 
> And running dtests on ASF's Jenkins was a 30-hour turnaround.
> :panda_face:
> Is there any hope for those of us who don't have access to cassci?

Everything running the ASF Jenkins jobs is here:
https://git1-us-west.apache.org/repos/asf?p=cassandra-builds.git

Most of that was copied from what runs all the tests on cassci, with the
exception of all the cloud server things we ran on EC2 and OpenStack.

It should be possible, with very minimal edits to the Job DSL groovy file
that sets up all the jobs:
https://git1-us-west.apache.org/repos/asf?p=cassandra-builds.git;a=blob;f=jenkins-dsl/cassandra_job_dsl_seed.groovy

...to set up a duplicate Jenkins instance, along with the same parameterized
jobs to feed branches to. That Jenkins could be public or private,
depending on the organization. The only edits I know might be needed
are slaveLabel, largeSlaveLabel, and possibly jdkLabel, but one could
also just mirror those labels in the Jenkins configuration.
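As a rough, hypothetical illustration of the kind of edits meant here (the
real job definitions in cassandra_job_dsl_seed.groovy will differ; the job
name, git URL, and label values below are placeholders), a Job DSL job
using those labels looks something like:

```groovy
// Hypothetical Job DSL fragment. Adjust slaveLabel / largeSlaveLabel /
// jdkLabel to match the node labels and JDK tool name configured in
// your own Jenkins instance.
def slaveLabel = 'cassandra'             // label on ordinary build agents
def largeSlaveLabel = 'cassandra-large'  // label on larger dtest agents
def jdkLabel = 'JDK 1.8 (latest)'        // JDK tool name in Jenkins config

job('cassandra-trunk-test') {
    label(slaveLabel)   // pin the job to agents carrying this label
    jdk(jdkLabel)
    scm {
        git {
            remote { url('https://gitbox.apache.org/repos/asf/cassandra.git') }
            branch('trunk')
        }
    }
    steps {
        shell('ant test')
    }
}
```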

I'd be happy to help anyone who needs advice on setting this up. The
ASF Jenkins is a quite minimal install with very few plugins, but one
would at least need the Job DSL plugin to get started:
https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin

A little trial and error would find any other plugins needed. The Post
build task plugin is used, too:
https://wiki.jenkins-ci.org/display/JENKINS/Post+build+task

If someone is interested in splitting up dtest and running it on
multiple machines, I can look into providing some code examples. That
work was internal and had some machine-specific bits, but it is not
terribly complex.

The dtest buckets are created by collecting the test names with
`nosetests --collect-only`, then comparing each test against its recent
run times (fetched via the Jenkins API on cassci) to split the buckets
up by expected run time. Each bucket is scp'ed to a scratch server to
run. The nosetests XML results from each scratch server are fetched
afterwards and combined with xunitmerge, so Jenkins can parse the
results in a sane fashion.

Those are the basics of how we split up and run dtests in parallel. Since
there's no way to do this on basic Jenkins slaves, the tests need to run
serially in the ASF Jenkins setup.
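The bucket-splitting described above could be sketched roughly as follows.
This is a minimal illustration, not the internal implementation: the test
names and timings are made up, and in practice the durations would come
from the Jenkins JSON API rather than a hard-coded dict.

```python
# Hypothetical sketch of time-balanced dtest bucketing: collect test
# names, look up each test's duration from a previous run, and greedily
# assign each test (longest first) to the currently lightest bucket.

def split_into_buckets(test_times, n_buckets):
    """Pack (test, seconds) pairs into n_buckets lists, balancing the
    total runtime of each bucket."""
    buckets = [[] for _ in range(n_buckets)]
    totals = [0.0] * n_buckets
    for test, secs in sorted(test_times.items(), key=lambda kv: -kv[1]):
        i = totals.index(min(totals))  # lightest bucket so far
        buckets[i].append(test)
        totals[i] += secs
    return buckets

# Illustrative timings, as might be scraped from a prior test run:
times = {
    "bootstrap_test": 600, "repair_test": 540, "paging_test": 300,
    "cql_test": 120, "auth_test": 90, "ttl_test": 60,
}
buckets = split_into_buckets(times, 2)
# With the timings above, the two buckets total 870s and 840s.
# Each bucket would then be scp'ed to a scratch server, run with
# nosetests, and the resulting xunit XMLs merged afterwards.
```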

-- 
Warm regards,
Michael


Re: Testing and jira tickets

2017-04-11 Thread mck

On 10 March 2017 at 05:51, Jason Brown  wrote:
> A nice convention we've stumbled into wrt to patches submitted via Jira is
> to post the results of unit test and dtest runs to the ticket (to show the
> patch doesn't break things). 
> [snip]
> As an example, should contributors/committers run dtests and unit tests on
> *some* machine (publicly available or otherwise), and then post those
> results to the ticket?


Yes please.
 I support the position that nothing should get committed without passing
 both the unit tests and dtests, and that any SHA in trunk or any release
 branch that fails them is automatically reverted.

I was under the impression that the point of tick-tock was to move the
code towards a stable-master approach, and that the lesson learned was
that restricting any release to a single patch version is, regardless of
how good the developers and CI system are, a pretty poor way of trying
to build a stable product.

So from tick-tock to 4.0, I was really hoping we would keep all the
stable-master and CI improvements obtained throughout the tick-tock
cycle while re-adding the discipline of ongoing patch versions to
supported releases. (While being realistic about the resources available.)


Unfortunately, without access to DataStax's cassci, the best that I could
do is this:
https://issues.apache.org/jira/browse/CASSANDRA-13307?focusedCommentId=15962001&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15962001


And running dtests on ASF's Jenkins was a 30-hour turnaround.
:panda_face:
Is there any hope for those of us who don't have access to cassci?

~mck


Re: Testing and jira tickets

2017-03-16 Thread Stefan Podkowinski
Yes, failed test results need to be looked at by someone. But that is
already the case, and it won't change whether we run tests for each
patch and branch or just once a day for a single dev branch. Having to
figure out exactly which commit caused a regression would take some
additional effort, but I don't think that would be the hardest part of
dealing with failed test results. I'd be happy to discuss other options,
but I'm pretty sure all of them come with a price, and we eventually
have to agree on something.


On 03/10/2017 03:43 PM, Josh McKenzie wrote:
>> I think we'd be able to figure out the one of them causing a regression
>> on the day after.
> That sounds great in theory. In practice, that doesn't happen unless one
> person steps up and makes themselves accountable for it.
>
> For reference, take a look at: https://cassci.datastax.com/view/trunk/, and
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20cassandra%20and%20resolution%20%3D%20unresolved%20and%20labels%20in%20(%27test-fail%27%2C%20%27test-failure%27%2C%20%27testall%27%2C%20%27dtest%27%2C%20%27unit-test%27%2C%20%27unittest%27)%20and%20assignee%20%3D%20null%20order%20by%20created%20ASC
>
> We're thankfully still in a place where these tickets are at least being
> created, but unless there's a body of people that are digging in to fix
> those test failures they're just going to keep growing.
>
> On Fri, Mar 10, 2017 at 5:03 AM, Stefan Podkowinski  wrote:
>
>> If I remember correctly, the requirement of providing test results along
>> with each patch was because of tick-tock, where the goal was to have
>> stable release branches at all times. Without CI for testing each
>> individual commit on all branches, this just won't work anymore. But
>> would that really be that bad? Can't we just get away with a single CI
>> run per branch and day?
>>
>> E.g. in the future we could commit to dev branches that are used to run
>> all tests automatically on Apache CI on daily basis, which is then
>> exclusively used for that. We don't have that many commits on a single
>> day, some of them rather trivial, and I think we'd be able to figure out
>> the one of them causing a regression on the day after. If all tests
>> pass, we can merge dev manually or even better automatically. If anyone
>> wants to run tests on his own CI before committing to dev, that's fine
>> too and will help analyzing any regressions if they happen, as we then
>> don't have to look at those patches (and all commits before on dev).
>>
>>
>>
>> On 09.03.2017 19:51, Jason Brown wrote:
>>> Hey all,
>>>
>>> A nice convention we've stumbled into wrt to patches submitted via Jira
>> is
>>> to post the results of unit test and dtest runs to the ticket (to show
>> the
>>> patch doesn't break things). Many contributors have used the
>>> DataStax-provided cassci system, but that's not the best long term
>>> solution. To that end, I'd like to start a conversation about what is the
>>> best way to proceed going forward, and then add it to the "How to
>>> contribute" docs.
>>>
>>> As an example, should contributors/committers run dtests and unit tests
>> on
>>> *some* machine (publicly available or otherwise), and then post those
>>> results to the ticket? This could be a link to a build system, like what
>> we
>>> have with cassci, or just  upload the output of the test run(s).
>>>
>>> I don't have any fixed notions, and am looking forward to hearing other's
>>> ideas.
>>>
>>> Thanks,
>>>
>>> -Jason
>>>
>>> p.s. a big thank you to DataStax for providing the cassci system
>>>



Re: Testing and jira tickets

2017-03-10 Thread Josh McKenzie
>
> I think we'd be able to figure out the one of them causing a regression
> on the day after.

That sounds great in theory. In practice, that doesn't happen unless one
person steps up and makes themselves accountable for it.

For reference, take a look at: https://cassci.datastax.com/view/trunk/, and
https://issues.apache.org/jira/issues/?jql=project%20%3D%20cassandra%20and%20resolution%20%3D%20unresolved%20and%20labels%20in%20(%27test-fail%27%2C%20%27test-failure%27%2C%20%27testall%27%2C%20%27dtest%27%2C%20%27unit-test%27%2C%20%27unittest%27)%20and%20assignee%20%3D%20null%20order%20by%20created%20ASC

We're thankfully still in a place where these tickets are at least being
created, but unless there's a body of people that are digging in to fix
those test failures they're just going to keep growing.

On Fri, Mar 10, 2017 at 5:03 AM, Stefan Podkowinski  wrote:

> If I remember correctly, the requirement of providing test results along
> with each patch was because of tick-tock, where the goal was to have
> stable release branches at all times. Without CI for testing each
> individual commit on all branches, this just won't work anymore. But
> would that really be that bad? Can't we just get away with a single CI
> run per branch and day?
>
> E.g. in the future we could commit to dev branches that are used to run
> all tests automatically on Apache CI on daily basis, which is then
> exclusively used for that. We don't have that many commits on a single
> day, some of them rather trivial, and I think we'd be able to figure out
> the one of them causing a regression on the day after. If all tests
> pass, we can merge dev manually or even better automatically. If anyone
> wants to run tests on his own CI before committing to dev, that's fine
> too and will help analyzing any regressions if they happen, as we then
> don't have to look at those patches (and all commits before on dev).
>
>
>
> On 09.03.2017 19:51, Jason Brown wrote:
> > Hey all,
> >
> > A nice convention we've stumbled into wrt to patches submitted via Jira
> is
> > to post the results of unit test and dtest runs to the ticket (to show
> the
> > patch doesn't break things). Many contributors have used the
> > DataStax-provided cassci system, but that's not the best long term
> > solution. To that end, I'd like to start a conversation about what is the
> > best way to proceed going forward, and then add it to the "How to
> > contribute" docs.
> >
> > As an example, should contributors/committers run dtests and unit tests
> on
> > *some* machine (publicly available or otherwise), and then post those
> > results to the ticket? This could be a link to a build system, like what
> we
> > have with cassci, or just  upload the output of the test run(s).
> >
> > I don't have any fixed notions, and am looking forward to hearing other's
> > ideas.
> >
> > Thanks,
> >
> > -Jason
> >
> > p.s. a big thank you to DataStax for providing the cassci system
> >
>


Re: Testing and jira tickets

2017-03-10 Thread Stefan Podkowinski
If I remember correctly, the requirement of providing test results along
with each patch came from tick-tock, where the goal was to have
stable release branches at all times. Without CI testing each
individual commit on all branches, this just won't work anymore. But
would that really be so bad? Can't we just get away with a single CI
run per branch per day?

E.g. in the future we could commit to dev branches that are used to run
all tests automatically on Apache CI on a daily basis, and which are
exclusively used for that. We don't have that many commits on a single
day, some of them rather trivial, and I think we'd be able to figure out
which one of them caused a regression the day after. If all tests
pass, we can merge dev manually, or even better, automatically. If anyone
wants to run tests on their own CI before committing to dev, that's fine
too, and it will help in analyzing any regressions if they happen, since
we then don't have to look at those patches (and all commits before them
on dev).



On 09.03.2017 19:51, Jason Brown wrote:
> Hey all,
> 
> A nice convention we've stumbled into wrt to patches submitted via Jira is
> to post the results of unit test and dtest runs to the ticket (to show the
> patch doesn't break things). Many contributors have used the
> DataStax-provided cassci system, but that's not the best long term
> solution. To that end, I'd like to start a conversation about what is the
> best way to proceed going forward, and then add it to the "How to
> contribute" docs.
> 
> As an example, should contributors/committers run dtests and unit tests on
> *some* machine (publicly available or otherwise), and then post those
> results to the ticket? This could be a link to a build system, like what we
> have with cassci, or just  upload the output of the test run(s).
> 
> I don't have any fixed notions, and am looking forward to hearing other's
> ideas.
> 
> Thanks,
> 
> -Jason
> 
> p.s. a big thank you to DataStax for providing the cassci system
> 


Re: Testing and jira tickets

2017-03-09 Thread Jason Brown
To Ariel's point, I don't think we can expect all contributors to run all
utests/dtests, especially when the patch spans multiple branches. On that
front, I, like Ariel and many others, typically create my own branch of
the patch and execute the tests myself. I think this is a reasonable
system, if slightly burdensome, given our project and our needs/demands
on it.

Whoever runs the tests, is it reasonable to expect the results to be
posted on the ticket? As CI moves to the Apache infrastructure, that
will become the final arbiter of "patch works or breaks things", but I
suspect executing the tests anywhere else will still be a very good
indicator of the viability of the patch.

Thoughts?

On Thu, Mar 9, 2017 at 12:31 PM, Ariel Weisberg  wrote:

> Hi,
>
> Before this change I had already been queuing the jobs myself as a
> reviewer. It also happens to be that many reviewers are committers. I
> wouldn't ask contributors to run the dtests/utests for any purpose other
> then so that they know the submission is done.
>
> Even if they did and they pass it doesn't matter. It only matters if it
> passes in CI. If it fails in CI but passes on their desktop it's not
> good enough so we have to run in CI anyways.
>
> If a reviewer is not a committer. Well they can ask someone else to do
> it? I know we have issues with responsiveness, but I would make myself
> available for that. It shouldn't be a big problem because if someone is
> doing a lot of reviews they should be a committer right?
>
> Regards,
> Ariel
>
> On Thu, Mar 9, 2017, at 01:51 PM, Jason Brown wrote:
> > Hey all,
> >
> > A nice convention we've stumbled into wrt to patches submitted via Jira
> > is
> > to post the results of unit test and dtest runs to the ticket (to show
> > the
> > patch doesn't break things). Many contributors have used the
> > DataStax-provided cassci system, but that's not the best long term
> > solution. To that end, I'd like to start a conversation about what is the
> > best way to proceed going forward, and then add it to the "How to
> > contribute" docs.
> >
> > As an example, should contributors/committers run dtests and unit tests
> > on
> > *some* machine (publicly available or otherwise), and then post those
> > results to the ticket? This could be a link to a build system, like what
> > we
> > have with cassci, or just  upload the output of the test run(s).
> >
> > I don't have any fixed notions, and am looking forward to hearing other's
> > ideas.
> >
> > Thanks,
> >
> > -Jason
> >
> > p.s. a big thank you to DataStax for providing the cassci system
>


Re: Testing and jira tickets

2017-03-09 Thread Ariel Weisberg
Hi,

Before this change I had already been queuing the jobs myself as a
reviewer. It also happens that many reviewers are committers. I
wouldn't ask contributors to run the dtests/utests for any purpose other
than so that they know the submission is done.

Even if they ran them and the tests passed, it wouldn't matter. It only
matters whether they pass in CI. If a run fails in CI but passes on a
contributor's desktop, that's not good enough, so we have to run in CI
anyway.

If a reviewer is not a committer, well, they can ask someone else to do
it. I know we have issues with responsiveness, but I would make myself
available for that. It shouldn't be a big problem, because if someone is
doing a lot of reviews, they should be a committer, right?

Regards,
Ariel

On Thu, Mar 9, 2017, at 01:51 PM, Jason Brown wrote:
> Hey all,
> 
> A nice convention we've stumbled into wrt to patches submitted via Jira
> is
> to post the results of unit test and dtest runs to the ticket (to show
> the
> patch doesn't break things). Many contributors have used the
> DataStax-provided cassci system, but that's not the best long term
> solution. To that end, I'd like to start a conversation about what is the
> best way to proceed going forward, and then add it to the "How to
> contribute" docs.
> 
> As an example, should contributors/committers run dtests and unit tests
> on
> *some* machine (publicly available or otherwise), and then post those
> results to the ticket? This could be a link to a build system, like what
> we
> have with cassci, or just  upload the output of the test run(s).
> 
> I don't have any fixed notions, and am looking forward to hearing other's
> ideas.
> 
> Thanks,
> 
> -Jason
> 
> p.s. a big thank you to DataStax for providing the cassci system


Re: Testing and jira tickets

2017-03-09 Thread Jonathan Haddad
No problem, I'll start a new thread.

On Thu, Mar 9, 2017 at 11:48 AM Jason Brown  wrote:

> Jon and Brandon,
>
> I'd actually like to narrow the discussion, and keep it focused to my
> original topic. Those are two excellent topics that should be addressed,
> and the solution(s) might be the same or similar as the outcome of this.
> However, I feel they deserve their own message thread.
>
> Thanks for understanding,
>
> -Jason
>
> On Thu, Mar 9, 2017 at 11:27 AM, Brandon Williams 
> wrote:
>
> > Let me further broaden this discussion to include github branches, which
> > are often linked on tickets, and then later deleted.  This forces a
> person
> > to search through git to actually see the patch, and that process can be
> a
> > little rough (especially since we all know if you're gonna make a typo,
> > it's going to be in the commit, and it's probably going to be the ticket
> > number.)
> >
> > On Thu, Mar 9, 2017 at 1:00 PM, Jonathan Haddad 
> wrote:
> >
> > > If you don't mind, I'd like to broaden the discussion a little bit to
> > also
> > > discuss performance related patches.  For instance, CASSANDRA-13271
> was a
> > > performance / optimization related patch that included *zero*
> information
> > > on if there was any perf improvement or a regression as a result of the
> > > change, even though I've asked twice for that information.
> > >
> > > In addition to "does this thing break anything" we should be asking
> "how
> > > does this patch affect performance?" (and were the appropriate docs
> > > included, but that's another topic altogether)
> > >
> > > On Thu, Mar 9, 2017 at 10:51 AM Jason Brown 
> > wrote:
> > >
> > > > Hey all,
> > > >
> > > > A nice convention we've stumbled into wrt to patches submitted via
> Jira
> > > is
> > > > to post the results of unit test and dtest runs to the ticket (to
> show
> > > the
> > > > patch doesn't break things). Many contributors have used the
> > > > DataStax-provided cassci system, but that's not the best long term
> > > > solution. To that end, I'd like to start a conversation about what is
> > the
> > > > best way to proceed going forward, and then add it to the "How to
> > > > contribute" docs.
> > > >
> > > > As an example, should contributors/committers run dtests and unit
> tests
> > > on
> > > > *some* machine (publicly available or otherwise), and then post those
> > > > results to the ticket? This could be a link to a build system, like
> > what
> > > we
> > > > have with cassci, or just  upload the output of the test run(s).
> > > >
> > > > I don't have any fixed notions, and am looking forward to hearing
> > other's
> > > > ideas.
> > > >
> > > > Thanks,
> > > >
> > > > -Jason
> > > >
> > > > p.s. a big thank you to DataStax for providing the cassci system
> > > >
> > >
> >
>


Re: Testing and jira tickets

2017-03-09 Thread Jason Brown
Jon and Brandon,

I'd actually like to narrow the discussion, and keep it focused to my
original topic. Those are two excellent topics that should be addressed,
and the solution(s) might be the same or similar as the outcome of this.
However, I feel they deserve their own message thread.

Thanks for understanding,

-Jason

On Thu, Mar 9, 2017 at 11:27 AM, Brandon Williams  wrote:

> Let me further broaden this discussion to include github branches, which
> are often linked on tickets, and then later deleted.  This forces a person
> to search through git to actually see the patch, and that process can be a
> little rough (especially since we all know if you're gonna make a typo,
> it's going to be in the commit, and it's probably going to be the ticket
> number.)
>
> On Thu, Mar 9, 2017 at 1:00 PM, Jonathan Haddad  wrote:
>
> > If you don't mind, I'd like to broaden the discussion a little bit to
> also
> > discuss performance related patches.  For instance, CASSANDRA-13271 was a
> > performance / optimization related patch that included *zero* information
> > on if there was any perf improvement or a regression as a result of the
> > change, even though I've asked twice for that information.
> >
> > In addition to "does this thing break anything" we should be asking "how
> > does this patch affect performance?" (and were the appropriate docs
> > included, but that's another topic altogether)
> >
> > On Thu, Mar 9, 2017 at 10:51 AM Jason Brown 
> wrote:
> >
> > > Hey all,
> > >
> > > A nice convention we've stumbled into wrt to patches submitted via Jira
> > is
> > > to post the results of unit test and dtest runs to the ticket (to show
> > the
> > > patch doesn't break things). Many contributors have used the
> > > DataStax-provided cassci system, but that's not the best long term
> > > solution. To that end, I'd like to start a conversation about what is
> the
> > > best way to proceed going forward, and then add it to the "How to
> > > contribute" docs.
> > >
> > > As an example, should contributors/committers run dtests and unit tests
> > on
> > > *some* machine (publicly available or otherwise), and then post those
> > > results to the ticket? This could be a link to a build system, like
> what
> > we
> > > have with cassci, or just  upload the output of the test run(s).
> > >
> > > I don't have any fixed notions, and am looking forward to hearing
> other's
> > > ideas.
> > >
> > > Thanks,
> > >
> > > -Jason
> > >
> > > p.s. a big thank you to DataStax for providing the cassci system
> > >
> >
>


Re: Testing and jira tickets

2017-03-09 Thread Brandon Williams
Let me further broaden this discussion to include github branches, which
are often linked on tickets, and then later deleted.  This forces a person
to search through git to actually see the patch, and that process can be a
little rough (especially since we all know if you're gonna make a typo,
it's going to be in the commit, and it's probably going to be the ticket
number.)

On Thu, Mar 9, 2017 at 1:00 PM, Jonathan Haddad  wrote:

> If you don't mind, I'd like to broaden the discussion a little bit to also
> discuss performance related patches.  For instance, CASSANDRA-13271 was a
> performance / optimization related patch that included *zero* information
> on if there was any perf improvement or a regression as a result of the
> change, even though I've asked twice for that information.
>
> In addition to "does this thing break anything" we should be asking "how
> does this patch affect performance?" (and were the appropriate docs
> included, but that's another topic altogether)
>
> On Thu, Mar 9, 2017 at 10:51 AM Jason Brown  wrote:
>
> > Hey all,
> >
> > A nice convention we've stumbled into wrt to patches submitted via Jira
> is
> > to post the results of unit test and dtest runs to the ticket (to show
> the
> > patch doesn't break things). Many contributors have used the
> > DataStax-provided cassci system, but that's not the best long term
> > solution. To that end, I'd like to start a conversation about what is the
> > best way to proceed going forward, and then add it to the "How to
> > contribute" docs.
> >
> > As an example, should contributors/committers run dtests and unit tests
> on
> > *some* machine (publicly available or otherwise), and then post those
> > results to the ticket? This could be a link to a build system, like what
> we
> > have with cassci, or just  upload the output of the test run(s).
> >
> > I don't have any fixed notions, and am looking forward to hearing other's
> > ideas.
> >
> > Thanks,
> >
> > -Jason
> >
> > p.s. a big thank you to DataStax for providing the cassci system
> >
>


Re: Testing and jira tickets

2017-03-09 Thread Jonathan Haddad
If you don't mind, I'd like to broaden the discussion a little bit to also
discuss performance-related patches. For instance, CASSANDRA-13271 was a
performance/optimization patch that included *zero* information on
whether there was any perf improvement or regression as a result of the
change, even though I've asked twice for that information.

In addition to "does this thing break anything", we should be asking "how
does this patch affect performance?" (and "were the appropriate docs
included?", but that's another topic altogether).

On Thu, Mar 9, 2017 at 10:51 AM Jason Brown  wrote:

> Hey all,
>
> A nice convention we've stumbled into wrt to patches submitted via Jira is
> to post the results of unit test and dtest runs to the ticket (to show the
> patch doesn't break things). Many contributors have used the
> DataStax-provided cassci system, but that's not the best long term
> solution. To that end, I'd like to start a conversation about what is the
> best way to proceed going forward, and then add it to the "How to
> contribute" docs.
>
> As an example, should contributors/committers run dtests and unit tests on
> *some* machine (publicly available or otherwise), and then post those
> results to the ticket? This could be a link to a build system, like what we
> have with cassci, or just  upload the output of the test run(s).
>
> I don't have any fixed notions, and am looking forward to hearing other's
> ideas.
>
> Thanks,
>
> -Jason
>
> p.s. a big thank you to DataStax for providing the cassci system
>


Testing and jira tickets

2017-03-09 Thread Jason Brown
Hey all,

A nice convention we've stumbled into wrt patches submitted via Jira is
to post the results of unit test and dtest runs to the ticket (to show the
patch doesn't break things). Many contributors have used the
DataStax-provided cassci system, but that's not the best long-term
solution. To that end, I'd like to start a conversation about the best
way to proceed going forward, and then add it to the "How to
contribute" docs.

As an example, should contributors/committers run dtests and unit tests on
*some* machine (publicly available or otherwise) and then post those
results to the ticket? This could be a link to a build system, like what we
have with cassci, or just upload the output of the test run(s).

I don't have any fixed notions, and am looking forward to hearing others'
ideas.

Thanks,

-Jason

p.s. a big thank you to DataStax for providing the cassci system