I would suggest that JIRAs tagged as 4.0 blockers be created for the list once
it is fleshed out.  Test plans and results could be posted to those JIRAs,
each to be closed once its test passes.  Any bugs found can then be linked
back to the relevant ticket for tracking as well.
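
As a rough sketch of how the open list could then be pulled for a status
report (assuming we settle on something like a "4.0-blocker" label; the label
name and query here are just placeholders), a few lines against the ASF Jira
REST API would do it:

    # Sketch only: list open 4.0 blocker tickets for a status report.
    # "4.0-blocker" is a placeholder for whatever tag we actually agree on.
    import requests

    JQL = 'project = CASSANDRA AND labels = "4.0-blocker" AND status != Closed'
    resp = requests.get(
        "https://issues.apache.org/jira/rest/api/2/search",
        params={"jql": JQL, "fields": "summary,status", "maxResults": 100},
    )
    resp.raise_for_status()
    for issue in resp.json()["issues"]:
        fields = issue["fields"]
        print(issue["key"], fields["status"]["name"], "-", fields["summary"])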

-Jeremiah

> On Sep 6, 2018, at 12:27 PM, Jonathan Haddad <j...@jonhaddad.com> wrote:
> 
> I completely agree with you, Sankalp.  I didn't want to dig too deep into
> the underlying testing methodology (and I still think we shouldn't just
> yet) but if the goal is to have confidence in the release, our QA process
> needs to be comprehensive.
> 
> I believe that having focused teams for each component, each with a team
> leader supported by committers & contributors, gives us the best shot at
> defining large scale functional tests that can be used to form both progress
> and bug reports.  (A person could / hopefully will be on more than one team.)
> Coming up with those comprehensive tests will be the job of the teams,
> getting frequent bidirectional feedback on the dev ML.  Bugs go in JIRA as
> per usual.
> 
> Hopefully we can continue this process after the release, giving the
> project more structure, and folding more people in over time as
> contributors and ideally committers / PMC.
> 
> Jon
> 
> 
>> On Thu, Sep 6, 2018 at 1:15 PM sankalp kohli <kohlisank...@gmail.com> wrote:
>> 
>> Thanks for starting this, Jon.
>> Instead of saying "I tested streaming", we should define exactly what was
>> tested: was all the data transferred, what happened when a stream failed,
>> etc.
>> Based on talking to a few users, it looks like most testing is done by
>> performing an operation or running a load and checking that it "worked" and
>> that there were no errors in the logs.
>> 
>> Another important thing will be to fix bugs ASAP ahead of testing, as fixes
>> can lead to more bugs :)
>> 
>>> On Thu, Sep 6, 2018 at 7:52 AM Jonathan Haddad <j...@jonhaddad.com> wrote:
>>> 
>>> I was thinking along the same lines.  For this to be successful I think
>>> either weekly or bi-weekly summary reports back to the mailing list by the
>>> team lead for each subsection on what's been tested and how it's been
>>> tested will help keep things moving along.
>>> 
>>> In my opinion the lead for each team should *not* be the contributor that
>>> wrote the feature, but someone who's very interested in it and can use the
>>> contributor as a resource.  I think it would be difficult for the
>>> contributor to poke holes in their own work - if they could do that it
>>> would have been done already.  This should be a verification process that's
>>> as independent as possible from the original work.
>>> 
>>> In addition to the QA process, it would be great if we could get a docs
>>> team together.  We've still got quite a few undocumented features and
>>> nuances; I think hammering those out would be a good idea.  Mick brought up
>>> updating the website docs in the thread on testing different JDKs [1]; if
>>> we could figure that out in the process we'd be in a really great position
>>> from the user perspective.
>>> 
>>> Jon
>>> 
>>> [1]
>>> https://lists.apache.org/thread.html/5645178efb57939b96e73ab9c298e80ad8e76f11a563b4d250c1ae38@%3Cdev.cassandra.apache.org%3E
>>> 
>>>> On Thu, Sep 6, 2018 at 10:35 AM Jordan West <jorda...@gmail.com> wrote:
>>>> 
>>>> Thanks for starting this thread, Jon!
>>>> 
>>>>> On Thu, Sep 6, 2018 at 5:51 AM Jonathan Haddad <j...@jonhaddad.com> wrote:
>>>> 
>>>>> For 4.0, I'm thinking it would be a good idea to put together a list of
>>>>> the things that need testing and see if people are willing to help test /
>>>>> break those things.  My goal here is to get as much coverage as possible,
>>>>> and let folks focus on really hammering on specific things rather than
>>>>> just firing up a cluster and rubber stamping it.  If we're going to be
>>>>> able to confidently deploy 4.0 quickly after its release we're going to
>>>>> need a high attention to detail.
>>>> 
>>>> +1 to a more coordinated effort. I think we could use the Confluence space
>>>> that was set up a while ago for this purpose, at least for finalized plans
>>>> and results: https://cwiki.apache.org/confluence/display/CASSANDRA.
>>>> 
>>>> 
>>>>> In addition to a signup sheet, I think providing some guidance on how to
>>>>> QA each thing that's being tested would go a long way.  Throwing "hey
>>>>> please test sstable streaming" over the wall will only get quality
>>>>> feedback from folks that are already heavily involved in the development
>>>>> process.  It would be nice to bring some new faces into the project by
>>>>> providing a little guidance.
>>>> 
>>>>> We could help facilitate this even further by considering the people
>>>>> signing up to test a particular feature as a team, with seasoned
>>>>> Cassandra veterans acting as team leads.
>>>> 
>>>> +1 to this as well. I am always a fan of folks learning about a
>>>> subsystem/project through testing. It can be challenging to get folks new
>>>> to a project excited about testing at first, but for those that do, or for
>>>> committers who want to learn another part of the db, it's a great way to
>>>> learn.
>>>> 
>>>> Another thing we can do here is make sure teams are writing about the
>>>> testing they are doing and their results. This will help share knowledge
>>>> about techniques and approaches that others can then apply. This knowledge
>>>> can be shared on the mailing list, in a blog post, or in JIRA.
>>>> 
>>>> Jordan
>>>> 
>>>> 
>>>>> Any thoughts?  I'm happy to take the lead on this.
>>>>> --
>>>>> Jon Haddad
>>>>> http://www.rustyrazorblade.com
>>>>> twitter: rustyrazorblade
>>> 
>>> 
>>> --
>>> Jon Haddad
>>> http://www.rustyrazorblade.com
>>> twitter: rustyrazorblade
> 
> 
> -- 
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade
