On Mon, Aug 19, 2013 at 7:47 AM, janI <j...@apache.org> wrote:
> On 19 August 2013 13:28, Rob Weir <robw...@apache.org> wrote:
>
>> On Mon, Aug 19, 2013 at 6:53 AM, Regina Henschel
>> <rb.hensc...@t-online.de> wrote:
>> > Hi Rob,
>> >
>> > Rob Weir schrieb:
>> >>
>> >> Moving this topic to its own thread.
>> >>
>> >>
>> >> It should be possible to code a very thorough set of test cases in a
>> >> spreadsheet, without using macros or anything fancy.  Just careful
>> >> reading of the ODF 1.2 specification and simple spreadsheet logic.
>> >>
>> >
>> > Reading the ODF 1.2 specification will not help for all functions.
>> > For example, the specification of the Bessel functions relies on
>> > the fact that interested readers will find the definitions
>> > elsewhere, using the function names.  [I know you will say, write
>> > an issue and make a proposal ;)]
>> >
>>
>> That's fine.  We rely on other sources that are more trusted than the
>> standard or the implementations.  Tables of special functions, for
>> example, can be found in books like Abramowitz and Stegun.  Also,
>> NIST's newer Digital Library of Mathematical Functions has a nice
>> table of software applications that implement each function:
>>
>> http://dlmf.nist.gov/software/
>>
>> The main point is that we should not rely on a self-referential
>> definition of a function, automatically taking AOO behavior as the
>> correct behavior.  And I would not trust Excel in all cases, since
>> there is ample published criticism of some of its numeric
>> algorithms.  So I'd rely more on specialized software.
>>
>
> We have a good saying from the Viking times: "you need to bake your
> bread before you can eat it".
>
> Before we head down a very long math trail (which of course is correct),
> let's not lose sight of the target.
>
> If we just had one or more spreadsheets that could check that all
> functions work like they did yesterday (regression testing), we would have
> taken a giant step towards automated testing, and at the same time
> improved quality and freed tester time for more specialized problems.
>
> So in short, let's focus on testing what we have. Assuming it works today
> is, to me, a good assumption, and in the few cases where it turns out that
> it does not, we simply adapt the test spreadsheet. This approach has the
> nice benefit that we can tell our users exactly which functions have
> changed in a release.
>

As mentioned before, if we make test spreadsheets per the example I
gave earlier, then we automatically record the current behavior of
AOO.  It requires no extra work.  And we also validate that this is
the correct value, which is worth doing as well, though that part does
require more work.  In either case, a large portion of the work is
designing the test so we pick the right input values to fully exercise
the function's behavior, e.g., error cases and other edge values.
That is the essential work, regardless of what kind of automation we
use or don't use.  That is where the real mental effort occurs.
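The baseline idea described above can be sketched outside the
spreadsheet as well. The following is a minimal Python sketch, not a
real harness: the function names, the IMPL lookup table, and the
baseline values are hypothetical stand-ins for what a test spreadsheet
(or a harness driving the office suite) would actually record and
evaluate.

```python
import math

# Hypothetical baseline, recorded from trusted references such as
# Abramowitz & Stegun or the NIST DLMF:
# (function name, input, expected value, relative tolerance)
BASELINE = [
    ("SQRT", 2.0, 1.4142135623730951, 1e-12),
    ("LN", math.e, 1.0, 1e-12),
    ("SIN", math.pi / 2, 1.0, 1e-12),
]

# Stand-in for evaluating a spreadsheet function; a real harness
# would drive the application itself (e.g. via its scripting API).
IMPL = {"SQRT": math.sqrt, "LN": math.log, "SIN": math.sin}

def regressions(baseline):
    """Return (name, input) pairs whose current result drifted from
    the recorded expected value."""
    failed = []
    for name, x, expected, rtol in baseline:
        got = IMPL[name](x)
        if not math.isclose(got, expected, rel_tol=rtol):
            failed.append((name, x))
    return failed

print(regressions(BASELINE))  # an empty list means no function changed
```

Any release where this list is non-empty tells users exactly which
functions changed, which is the benefit jan describes.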

-Rob


>
> rgds
> jan I.
>
>
>
>>
>> Regards,
>>
>> -Rob
>>
>>
>> > Kind regards
>> > Regina
>> >
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: qa-unsubscr...@openoffice.apache.org
>> > For additional commands, e-mail: qa-h...@openoffice.apache.org
>> >
>>
