Hi Rimas, Petr and all,

Thanks for the nice discussions; here are my thoughts :)

On Sat, Nov 12, 2011 at 10:46:48PM +0200, Rimas Kudelis wrote:
> This is thus irrelevant, cause there's nothing to modify. :)
> 
> > 2. I am not sure what is the meaning of the numbers 001, 002, 003.
> >
> >    It looks like they define the order in which we should process the
> >    test cases. If this is true, it does not look ideal:
> >
> >     + if we do another important test case, we will need to rename
> >       all less important test cases to keep the right order
> >
> >     + test cases will be checked by many people; we can't force them
> >       to do it in an exact order; the result would be that all
> >       people will test the same test case in parallel
> >
> >       Hmm, we need to encourage people to do the test cases in
> >       random order. We still should somehow prioritize the test
> >       cases.
> 
> I personally don't like current naming with ugly prefixes at all, but
> it's Yifan's call, and I suppose he has a good reason for that naming
> scheme. However, I'm afraid we can't randomize testcase order by
> default. Currently, this would probably have to be done manually each
> time a relevant subgroup is updated. That would be a PITA. On the other
> hand, it's not really that bad. You can still run the tests in random
> order, but you will always see them in the specified order.

From my understanding, I don't mean to encourage users to run the cases
in a specific order either. The IDs were already there when I got
involved, and I think it still makes sense to keep them, mainly because
of the "syncing" problem between test cases in different languages.

For example:

    #EN - w001 xxx

is supposed to have the same content as (but in a different language
version):

    #FR - w001 xxx
    #DE - w001 xxx
    #pt-BR - w001 xxx

These IDs give us a reasonable indication of which cases are supposed
to be "synced" with each other (they may not have exactly the same
testing steps because of differences in language settings, but they
should cover the same areas). So with the current testing organization,
I think the IDs still play their role in the L10N test branches;
without them, syncing the cases could be painful. In the Function
Regression testing branch, on the other hand, since we now use a single
case to host all language versions of a test case, it may no longer
make sense to keep the ID.
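
To make the syncing idea a bit more concrete, here is a rough sketch
(plain Python, nothing Litmus-specific; the title format and the check
itself are only my assumption of how the IDs could be used) that groups
test case titles by their ID and reports which languages are still
missing a given case:

    import re
    from collections import defaultdict

    # Hypothetical test case titles as they might appear in the L10N branches.
    titles = [
        "#EN - w001 Open and save a Writer document",
        "#FR - w001 Ouvrir et enregistrer un document Writer",
        "#DE - w001 Writer-Dokument oeffnen und speichern",
        "#EN - w002 Insert a picture",
        "#pt-BR - w002 Inserir uma imagem",
    ]

    LANGUAGES = {"EN", "FR", "DE", "pt-BR"}
    TITLE_RE = re.compile(r"#(?P<lang>[\w-]+) - (?P<case_id>\w\d{3}) ")

    # Group the languages by test case ID, so we can see which translations exist.
    by_id = defaultdict(set)
    for title in titles:
        match = TITLE_RE.match(title)
        if match:
            by_id[match.group("case_id")].add(match.group("lang"))

    # Any case still missing a language is a case that needs syncing.
    for case_id, langs in sorted(by_id.items()):
        missing = LANGUAGES - langs
        if missing:
            print(f"{case_id}: missing translations for {', '.join(sorted(missing))}")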

> >     I suggest to split test cases into several levels by priorities:
> >
> >         P1 - highest: used for very basic tests, e.g. the app can be
> >              installed; it starts; it is able to load/save some test
> >              documents; so it is a kind of smoketest
> >
> >         P2 - high: test very common functionality that is used by most
> >              users, e.g. able to write text, insert a picture, draw
> >              elements, create a table, use a function in Calc, create a
> >              graph, run a presentation
> >
> >         P3 - medium: test common functionality that is used by a typical,
> >              somewhat experienced office user, e.g. create borders around
> >              tables, do animations between slides, modify text styles,
> >              modify the master slide page
> >
> >         P4 - low: test functionality for hi-tech users, e.g. writing
> >              macros, using the Calc solver, complex operations with
> >              databases
> >
> >     I suggest to use the names:
> >
> >     p1g - <summary of a P1 global test>
> >     p1w - <summary of a P1 Writer test>
> >     p2g - <summary of a P2 global test>
> >     p2w - <summary of a P2 Writer test>
> >
> >     Then we will have all p1 test cases listed before p2 test cases.
> >
> >
> > What do you think?
> 
> Prioritizing is probably a good idea, but like I said, random order
> would require some Litmus modification. While certainly possible (and
> probably quite easy), I'm not sure that is what we indeed want.

Actually, it is a great idea to have priorities here; at the very least
they help us define subsets of test runs. For example, we can create
"smoke test" runs by selecting only the P1 test cases when creating a
test run from a full regression branch containing all cases.

That is to say, even before we sort out how ordering of the test cases
could be implemented, we can always create specific test runs on demand
using the priority "tags": we can define "smoke test" runs, "basic
test" runs and "full regression test" runs by selecting cases, so that
the test cases are physically divided into the test runs and sorting
becomes trivial. It would still be nice if we could do some hacking on
the ordering, though, as that would save much of the case selection
effort and make the system more flexible. :)
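
Just to illustrate, a minimal sketch of the priority-based selection
(plain Python again; the p1w-style names follow Petr's suggestion, the
case titles and the helper itself are made up for the example):

    # Hypothetical test case names using the p<priority><component> scheme.
    full_regression_branch = [
        "p1g - Application starts and opens the Start Center",
        "p1w - Open and save a plain Writer document",
        "p2w - Insert a picture into a Writer document",
        "p3c - Create borders around a Calc table",
        "p4c - Solve an optimization problem with the Calc solver",
    ]

    def select_by_priority(cases, max_priority):
        """Keep only the cases whose p<N> prefix is at or above the priority."""
        selected = []
        for name in cases:
            if name.startswith("p") and name[1].isdigit():
                if int(name[1]) <= max_priority:
                    selected.append(name)
        return selected

    # A "smoke test" run is just the P1 subset; "basic test" could be P1 + P2.
    smoke_test_run = select_by_priority(full_regression_branch, max_priority=1)
    basic_test_run = select_by_priority(full_regression_branch, max_priority=2)

    print(smoke_test_run)   # only the p1* cases
    print(basic_test_run)   # the p1* and p2* cases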

At the same time, as stated before, for the L10N test case branches we
probably still want the test case IDs until we have a better solution
for the translation syncing problem.

Best wishes,
Yifan