On 28 January 2011 06:19, Paul Poulain <[email protected]> wrote:
> Le 27/01/2011 17:49, Ian Walls a écrit :
>> I hear you.  There are lots of really great features that are in
>> limbo, waiting to be tested.  The unfortunate situation of life is
>> that we're all incredibly busy, and it's hard to find time to test
>> things.  This is compounded when testing certain features requires
>> coding or systems experience, since that limits the pool of potential
>> testers.  As a result, lots of good stuff sits, waits, and diverges.
> I'm very happy to read that we agree on the problem (congrats, you've
> summarized in 3 lines what I needed 50+ to express ;-) )
>> So what can be done?  I don't think that changing the signoff
>> procedure will have the desired effect.  That'll just let more
>> unreviewed code slip in, potentially introducing bugs we won't find
>> for weeks/months/years.
> That's where we disagree. But if we agree on the problem, and if we all
> put our hands to it, we will find a solution that everyone agrees is
> the right one!
>> I think a better solution is make it easier to test and signoff on
>> work.  There are several components to this:
>>
>>     * Testing plans for larger developments/bugfixes
>>     * A robust testing data set made readily available
>>     * Teaching people how to test and signoff on code
>>
>> By including testing plans with developments or complex bugfixes, the
>> developer is communicating to everyone how they can prove their code
>> works.  It lays out the intention of the development (it should do x,
>> y and z), and a series of tests to show how to get x, y and z without
>> losing a, b and c.
>>
>> Combine this testing plan (written in language librarians can
>> understand, not coder jargon) with the necessary data set to do the
>> testing (an SQL dump you just load into a DB), and you've lowered the
>> barrier for testing so that anyone who can afford a little time to run
>> through a series of listed procedures can answer the question "does
>> this work?".
> well, in my opinion, the one who writes this testing plan must not be the
> one who wrote the feature. Because, of course, the feature has already been
> tested, and the one who tested it will have missed a use case, or forgotten
> something... ("given enough eyes, all bugs will be found").
> So we're back to the question: who can dedicate the time?
> Maybe we're back to the question of someone being (collectively) paid
> just for this role? As of today, I see that no one finds time to do it!
>> The third step to this is to lay out the procedure for running through
>> the test plan in a clear, simple manner, and distribute that
>> information far and wide.  Make it something that librarians can do by
>> following a list of steps.  Lowering the threshold of experience
>> required to test things will allow us to harness the Long Tail.
>>
>> To this end, I'm throwing my hat in the ring for Quality Assurance
>> Manager for Koha 3.6.  My proposal can be found on the wiki
>> (http://wiki.koha-community.org/wiki/QA_Manager_for_3.6_Proposal), and
>> much of it is explained above.  In addition to this, I would also
>> serve as a coordinator for testing work submitted, and provide regular
>> reports to the community on the status of these developments.
>> Branches that are not receiving active testing feedback would receive
>> attention towards creating a simpler, easier to follow testing plan.
> Do you really think so? I don't: if that were the case, then
> someone should/would have said "well, seems interesting, but I need more
> information/help/..."
> We had set up sandboxes for all our branches and organized a meeting where
> I was alone (only kf jumped in to say "sorry, I can't be here"). I was so
> disappointed that... I forgot I had planned a 2nd meeting a few days
> later, was not there as expected, and no one complained, so I concluded
> no one attended that one either. I ask, and will continue to ask, on
> #irc when someone passes by, but I feel like someone crying in the
> desert (frenchism suspected here)
>
> So, I fully agree that the workflow is theoretically a good one. But we
> lack the resources to make it happen. I'm very scared about that, but
> I'd like to see things going on anyway.

I think the workflow is good in practice too.

For nearly every change, the workflow is:

* create a branch per feature
* file a bug for it
* submit your patches to the ML
* bug is marked "needs signoff"
* someone takes the patch, checks that it applies and works, signs it
off, and submits the signed-off patch
* bug is marked "signed off"
* RM applies the signed-off patch to a branch based on master
* RM tests it; if it is OK
* RM merges
* bug is marked "patch pushed"
* feature/bug is tested and the bug updated
* bug is marked "resolved"

The problem is, and I don't mean to harp on about this but it is
important, that the work that is waiting to be merged has not
followed the workflow. The branches are not one per feature. This
causes the workflow for dealing with them to become:

* massive bunch of changes submitted in one branch
* RM merges them into qa-ready branches, testing that they merge cleanly to master
* try to find people to test a huge pile of changes
* changes are signed off
* RM tests
* RM merges

Now this is a legacy problem, but one we need to make sure doesn't happen
again; this is the only reason I keep repeating: one feature per
branch.

What I see is that we have two things that need to happen: the Biblibre
patches need to get tested, and, in general, the testing step of the
workflow could be improved. I don't think either of these issues will
be solved by removing the signoff step.

Biblibre have acknowledged that they made a mistake in having one
massive branch with all the changes on it, and I congratulate them on
that. To butcher an old saying, "to make a mistake is human, to admit
it in public is divine". To their credit, they have also tried to cut
the work into smaller branches. But these are still big branches with
more than one feature in them, and this, I think, is the major (in fact
possibly the only) reason they have not all been fully tested and
merged yet.

This is a separate problem; let's call it "how do we deal with work
that has been submitted that doesn't follow the workflow?", and work on
a solution for that.
Paul himself said that for patches that do follow the workflow (the
image patch Nicole tested and signed off), the workflow works. So let's
not change that; let's work out how to deal with patches that have been
submitted that don't follow the workflow, i.e. step 1: one branch per
feature.

It's not an easy problem, but it is one we are all motivated to solve.
No one, least of all me, wants to see anyone's work go to waste. I
want Biblibre's code in master before 3.4; let's try to work out how we
can do it, all the while staying vigilant and making sure all future work
follows the one-branch-per-feature/bug rule. Combine
that with a better way of testing, and someone championing it, and I
think we are looking good for the future.

Chris

PS please forgive typos, I was woken at 5am by my son, and haven't had
enough caffeine yet.
_______________________________________________
Koha-devel mailing list
[email protected]
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/
