On 19.3.2015 12:33, Martin Kosek wrote:
> On 03/19/2015 11:45 AM, Petr Spacek wrote:
>> On 19.3.2015 10:11, Martin Kosek wrote:
>>> On 03/19/2015 09:25 AM, Petr Spacek wrote:
>>>> I do not have much to add to the process itself. After the first reading it
>>>> seems pretty heavyweight, but let's try it; it can be refined at any time :-)
>>> Right, but then we would need to migrate the data about test completion and
>>> so on - which is more work. So it is much better to define something working
>>> now than to change it a couple of months later.
>>> We were already trying to invent something as lightweight as possible; these
>>> were the minimum new fields we came up with to be able to track the test
>>> coverage and plans. If you have another proposal for how to track it better,
>>> I would love to hear it, really :-)
>> Sure. For me the main question is when *designing of tests* should start and
>> how it is synchronized with feature design. Is it done in parallel? Or
>> sequentially? When does the feedback from test designers flow back? Isn't it
>> too late otherwise?
>> Let's discuss a ticket workflow like this:
>> new -> design functionality&tests -> write code&tests -> test run -> closed
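The workflow above is a simple linear progression of states. As a minimal
sketch (the state names come from the line above; the helper and its name are
my own invention, not anything in Trac or FreeIPA):

```python
# Hypothetical sketch: the proposed ticket workflow as a linear sequence
# of states. The helper just advances a ticket one step at a time.
WORKFLOW = [
    "new",
    "design functionality&tests",
    "write code&tests",
    "test run",
    "closed",
]

def next_state(state):
    """Return the state that follows `state`, or None once closed."""
    i = WORKFLOW.index(state)  # raises ValueError for unknown states
    return WORKFLOW[i + 1] if i + 1 < len(WORKFLOW) else None
```

The point of writing it down this way is that tests are designed in the same
step as the functionality, before any final code is written.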
>> IMHO we should have tests *designed* before we start to implement the final
>> version of the functionality. It may be too late to find out that the
>> interface design is flawed (e.g. from the user's point of view) when the
>> feature is fully implemented and the test phase is reached.
>> Designing/writing tests early could uncover things like a poor interface
>> sooner, while it is still easy to change interfaces. Currently we have
>> reviews before the implementation starts, but actually designing tests at
>> the same time would attract more eyes/brains to the feature design phase. We
>> may call it a 'first usability review' if we wish :-)
>> In my mind, test designers should be the first feature users (even if only
>> virtually), so their early feedback is crucial.
>> Note that this approach does not preclude experimental/quick&dirty
>> prototypes as part of the design phase, but it has to be clear that a
>> prototype might (and should!) be thrown away if the first idea wasn't the
>> best one.
> Yes! This is exactly why this QE team was created - to be able to test as
> early as possible and to review designs with QE eyes as early as possible.
Great, in that case we can ignore the next section completely (it was meant as
a fallback in case the idea above was too radical).
>> If this is too radical:
>> Then there is the question whether we actually need separate fields for QE
>> state and Test case. The Test case field could behave in the same way as the
>> Bugzilla link field:
>> - empty field - undecided
>> - 0 (or a string like "not necessary") - a test case is deemed unnecessary
>> - non-zero link - apparently, a test case exists
>> It would be more consistent with what we have for Bugzilla links.
> The metadata we come up with should be able to answer at least the following
> queries:
> - which tickets (RFEs/bugs) are covered with tests in a specific milestone,
> and what the test cases are
> - who, from the QE team, is working on which tickets
> - a list of tickets where we want tests and which are up for grabs by QE
> I am not sure if this can be covered just with the extra QE phase and Test
> case field.
Okay, it might be easier with more explicit fields as proposed.
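For illustration, the three queries listed above could be answered from
explicit fields roughly like this (a sketch over a made-up ticket model; every
field name here is an assumption, not an actual Trac field):

```python
# Hypothetical ticket records; milestone, qe_owner, test_case and
# needs_test are invented field names for illustration only.
tickets = [
    {"id": 1, "milestone": "4.2", "qe_owner": "alice",
     "test_case": "http://tests/1", "needs_test": True},
    {"id": 2, "milestone": "4.2", "qe_owner": None,
     "test_case": None, "needs_test": True},
    {"id": 3, "milestone": "4.1", "qe_owner": "bob",
     "test_case": None, "needs_test": False},
]

def covered_in_milestone(tickets, milestone):
    # Tickets covered with tests in a milestone, with their test cases.
    return [(t["id"], t["test_case"]) for t in tickets
            if t["milestone"] == milestone and t["test_case"]]

def qe_assignments(tickets):
    # Who from the QE team is working on which tickets.
    return {t["id"]: t["qe_owner"] for t in tickets if t["qe_owner"]}

def up_for_grabs(tickets):
    # Tickets where tests are wanted but no QE owner has taken them yet.
    return [t["id"] for t in tickets
            if t["needs_test"] and not t["qe_owner"]]
```

If all three lookups are needed, a single overloaded Test case field is not
enough; the QE owner and "tests wanted" information have to live somewhere.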
Manage your subscription for the Freeipa-devel mailing list:
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code