Hi Rob,

Rob Weir schrieb:
On Mon, Aug 19, 2013 at 7:47 AM, janI <j...@apache.org> wrote:
On 19 August 2013 13:28, Rob Weir <robw...@apache.org> wrote:

On Mon, Aug 19, 2013 at 6:53 AM, Regina Henschel
<rb.hensc...@t-online.de> wrote:
Hi Rob,

Rob Weir schrieb:

Moving this topic to its own thread.


It should be possible to code a very thorough set of test cases in a
spreadsheet, without using macros or anything fancy.  Just careful
reading of the ODF 1.2 specification and simple spreadsheet logic.


Reading the ODF 1.2 specification will not help for all functions. For
example, the specification of the Bessel functions relies on interested
readers finding the definition elsewhere, using the function names.
[I know you will say: write an issue and make a proposal ;)]


That's fine.  You rely on other sources that are more trusted than the
standard or the implementations.  Tables of special functions, for
example, can be found in books like Abramowitz and Stegun.  There is also
the NIST's newer Digital Library of Mathematical Functions, which has a
nice table of software applications that implement each function:

http://dlmf.nist.gov/software/

The main point is that we should not rely on a self-referential
definition of a function, automatically taking AOO behavior as the
correct behavior.  And I would not trust Excel in all cases, since
there is ample published criticism of some of its numeric
algorithms.  So I'd rely more on specialized software.
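To illustrate the point about checking an implementation against a trusted published value rather than against itself, here is a minimal Python sketch (hypothetical, not AOO code): it evaluates the standard power series for J_n(x) and compares the result with the tabulated value of J_0(1) from Abramowitz and Stegun.

```python
import math

def bessel_j(n, x, terms=30):
    """Power series for the Bessel function of the first kind:
    J_n(x) = sum_{k>=0} (-1)^k / (k! * (n+k)!) * (x/2)^(2k+n)."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (2 * k + n)
               for k in range(terms))

# Reference value for J_0(1) as tabulated in Abramowitz and Stegun.
assert abs(bessel_j(0, 1.0) - 0.7651976866) < 1e-9
```

The same pattern works in a test spreadsheet: one column of inputs, one column of trusted reference values, and a tolerance comparison against the application's own result.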

Oh, it's difficult with my limited English. I do not mean that an algorithm is missing from the ODF spec; I want to say that for some functions you have to look outside ODF to get the correct value.



We have a good saying from the Viking times: "you need to bake your
bread before you can eat it".

Before we get out on a very long math trail (which is of course correct),
let's not lose sight of the target.

If we just had one or more spreadsheets that could check that all
functions work like they did yesterday (regression testing), we would have
taken a giant step towards automated testing, and at the same time
improved quality and freed tester time for more special problems.

So in short, let's focus on testing what we have. Assuming it works today
is to me a good assumption, and in the few cases where it turns out that it
does not, we simply adapt the test spreadsheet. This approach would have
the nice benefit that we can tell our users exactly which functions have
changed in a release.


As mentioned before, if we make test spreadsheets per the example I
gave earlier, then you automatically record the current behavior of
AOO.  It requires no extra work.  And you also validate that this is
the correct value, which is worth doing as well, though that part does
require more work.  But in either case a large portion of the work is
designing the test so you pick the right input values to fully exercise
the function's behavior, e.g., error cases and other edge values.
That is the core essential work, regardless of what kind of automation
we use or don't use.  That is where the real mental work occurs.
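The record-and-recheck idea can be sketched outside a spreadsheet too. This is a hypothetical Python illustration (not AOO code), using math.atan as a stand-in for any spreadsheet function: record the current outputs for a set of chosen inputs (including edge values), then later report any input whose result has drifted.

```python
import math

def record_baseline(func, inputs):
    """Record the function's current output for each input (the 'yesterday' values)."""
    return {x: func(x) for x in inputs}

def regression_failures(func, baseline, rel_tol=1e-12):
    """Return the inputs whose current result drifted from the recorded baseline."""
    return [x for x, expected in baseline.items()
            if not math.isclose(func(x), expected, rel_tol=rel_tol)]

# Edge values do the heavy lifting: zero, negative, and large arguments.
inputs = [0.0, 1.0, -1.0, 1e6]
baseline = record_baseline(math.atan, inputs)
assert regression_failures(math.atan, baseline) == []  # unchanged function passes
```

In the spreadsheet version, the baseline column holds the recorded values and a tolerance formula flags each drifted cell, which is exactly what lets us tell users which functions changed in a release.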


There are spreadsheets in Bugzilla which I made to show improvements, but they can be adapted to work as tests; for example, attachment https://issues.apache.org/ooo/attachment.cgi?id=51549 from issue https://issues.apache.org/ooo/show_bug.cgi?id=15090.

If we go with test documents, we need a place where such documents can be uploaded, easily downloaded, and referenced. We need an overview of which tests already exist. We need a naming schema to make them easy to identify in directories too. And we should have a common style for indicating errors. I can start once we have agreed on such an environment.

Kind regards
Regina


---------------------------------------------------------------------
To unsubscribe, e-mail: qa-unsubscr...@openoffice.apache.org
For additional commands, e-mail: qa-h...@openoffice.apache.org
