Hello,

To answer Spencer Graves's question, I would like to mention that there is an alternative to RUnit that could ease the writing of unit tests, regression tests and integration tests for R: svUnit. See http://www.sciviews.org/SciViews-K/index.html. It is not on CRAN yet, but on R-Forge, because it is still in development. You can install it with:

install.packages("svUnit",repos="http://R-Forge.R-project.org";)

It is fully compatible with RUnit test units: the checkXXX() functions and .setUp()/.tearDown() are fully supported. Indeed, my first goal was to use RUnit and build a GUI on top of it... but for several reasons that was not possible, and I had to write my own checkXXX() functions (same interface, but totally different internal code).
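
As an illustration, here is a minimal sketch of a classical RUnit-style test file that should run unchanged under svUnit (the file name, the shared test data and the use of the Square() function defined below are just examples):

-----------------------------
# runitSquare.R -- an RUnit-compatible test file, also usable by svUnit
.setUp <- function() {
    # Run before each test function: prepare shared test data
    assign(".squareData", 1:3, envir = .GlobalEnv)
}

.tearDown <- function() {
    # Run after each test function: clean up
    rm(".squareData", envir = .GlobalEnv)
}

testSquare <- function() {
    checkEquals(c(1, 4, 9), Square(.squareData))
    checkEquals(4, Square(-2))
    checkException(Square("xx"))
}
-----------------------------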

svUnit offers some original ways (for the R microcosm, at least) to define, run and view your test suites. Here are some of them:

1) You do not need to write a test suite file on disk to test an R object. Suppose you have your own function like this:

> Square <- function(x) return(x^2)

You can simply attach a test unit to this object by:

> test(Square) <- function() {
+     checkEquals(9, Square(3))
+     checkEquals(10, Square(3))  # This intentionally fails
+     checkEquals(9, SSSquare(3)) # This intentionally raises error
+     checkEquals(c(1, 4, 9), Square(1:3))
+     checkException(Square("xx"))
+ }

And you can run it just as easily. You should first know that test failures, errors and other data concerning your tests are logged globally. You clear the log with clearLog() and you inspect it with Log(), or perhaps summary(Log()) if you have accumulated a lot of tests in the logger. So, to test your Square() function only, you do:

> clearLog()
> runTest(Square)
> Log()
= A svUnit test suite run in less than 0.1 sec with:

* test(Square) ... **ERROR**


== test(Square) run in less than 0.1 sec: **ERROR**

//Pass: 3 Fail: 1 Errors: 1//

* : checkEquals(10, Square(3)) run in 0.002 sec ... **FAILS**
Mean relative difference: 0.1
 num 9

* : checkEquals(9, SSSquare(3)) run in less than 0.001 sec ... **ERROR**
Error in mode(current) : could not find function "SSSquare"

2) You can attach test units to any kind of object, and you can even define stand-alone tests (like integration or regression tests, for instance):

> test_Integrate <- svTest(function() {
+         checkTrue(1 < 2, "check1")
+         v <- 1:3        # The reference
+         w <- 1:3        # The value to compare to the reference
+         checkEquals(v, w)
+ })

then:

> runTest(test_Integrate)
> Log() # Note how test results accumulate in the logger
(output not shown)

3) Contrary to RUnit, you can even run the checkXXX() functions anywhere, and their results will also accumulate in the logger (but then there is no context associated with the test, and the title is just "eval run in ..."):

> checkTrue(1 < 2)
> Log()
(output not shown)

4) You have convenient functions to catalog all available test units/test suites (in R packages, in objects in memory, integration tests, ...). You can even manage exclusion lists (by default, test suites defined in the svXXX packages and in RUnit are excluded). So:

> (oex <- getOption("svUnit.excludeList")) # Excluded items (regexp)
[1] "package:sv"    "package:RUnit"
> # clear the exclusion list and list all available test units
> options(svUnit.excludeList = NULL)
> svSuiteList() # Note that our test functions are also listed
A svUnit test suite definition with:

- Test suites:
[1] "package:svUnit"                "package:svUnit (VirtualClass)"

- Test functions:
[1] "test(Square)"   "test_Integrate"

> # Restore previous exclusion list
> options(svUnit.excludeList = oex)

Look at ?svSuite for more explanations.

5) You can easily transform a test associated with an object into an RUnit-compatible test file on disk:

> unit <- makeUnit(Square)
> file.show(unit, delete.file = TRUE)

6) You can easily integrate svUnit tests into R packages and into the R CMD check mechanism: all your units are tested silently when there are no errors or failures, but R CMD check is interrupted with an extensive report in case of problems. Just define an .Rd page (named 'unitTests.mypkg' by default) and write an example section that runs the test units you want. Here is an example:

-----------------------------
\name{unitTests}
\alias{unitTests.svUnit}

\title{ Unit tests for the package svUnit }
\description{
  Performs unit tests defined in this package by running
  \code{example(unitTests.svUnit)}. Tests are in \code{runit*.R} files
  located in the '/unitTests' subdirectory or one of its subdirectories
  ('/inst/unitTests' and subdirectories in package sources).
}

\author{Me (\email{...@mysite.org})}

\examples{
library(svUnit)
# Make sure to clear log of errors and failures first
clearLog()

# Run all test units defined in the 'svUnit' package
(runTest(svSuite("package:svUnit"), "svUnit"))

\donttest{
# Tests to run with example() but not with R CMD check
# Run all test units defined in the /unitTests/VirtualClass subdir
(runTest(svSuite("package:svUnit (VirtualClass)"), "VirtualClass"))
}

\dontrun{
# Tests to present in ?unitTests.svUnit but to never run automatically
# Run all currently loaded test cases and test suites of all loaded packages
(runTest(svSuiteList(), "AllTests"))
}

\dontshow{
# Put here test units you want to run during R CMD check but don't want to show
# or run with example(unitTests.svUnit)
}

# Check errors (needed to interrupt R CMD check in case of problems)
errorLog()
}

\keyword{utilities}
---------------------------

Note also that this mechanism allows you to run a selection of your package's test units through example(unitTests.mypkg), which is convenient, for instance, for further interactive analysis of problems. Look at the source code of the svUnit package for an example of such integration of test units into the R package build/check mechanism.
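
For instance, an interactive session for a hypothetical package 'mypkg' (using the default page name) could look like this:

> library(svUnit)
> library(mypkg)            # hypothetical package with a unitTests.mypkg page
> clearLog()                # start from an empty logger
> example(unitTests.mypkg)  # run the checks from the example section
> summary(Log())            # inspect which units passed or failed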

7) svUnit fully integrates with the SciViews/Komodo Edit GUI/IDE. You then have a completely graphical presentation of your tests, and automated execution of your tests while you type your R code. See the screenshot at http://www.sciViews.org, 'RUnit' tab at the right of the Komodo Edit window. There is a simple interface available for implementing similar panels in other GUIs/IDEs.

8) svUnit reports also integrate nicely into DokuWiki, or any Creole-compatible wiki engine (like the R Wiki, http://wiki.r-project.org), for web-based test reports.

The code is complete, but currently considered to be in beta stage. A vignette is in preparation for this package. We are looking for comments and testers before we release it on CRAN. Note that this package should be considered complementary to RUnit: depending on the way you want to run your test units, you can use either svUnit, or RUnit, or both with the same test code.

Best,

Philippe Grosjean

..............................................<°}))><........
 ) ) ) ) )
( ( ( ( (    Prof. Philippe Grosjean
 ) ) ) ) )
( ( ( ( (    Numerical Ecology of Aquatic Systems
 ) ) ) ) )   Mons-Hainaut University, Belgium
( ( ( ( (
..............................................................

Spencer Graves wrote:
Hi, All:
What support exists for 'regression testing' (http://en.wikipedia.org/wiki/Regression_testing) of R code, e.g., as part of the "R CMD check" process? The "RUnit" package supports "unit testing" (http://en.wikipedia.org/wiki/Unit_testing). Those concerned about the software quality of code they use regularly could easily develop their own "softwareChecks" package that runs unit tests in its "\examples". Then, each time a new version of the package and/or R is downloaded, you can do "R CMD check" of your "softwareChecks": if it passes, you know that it passed all your checks.

I have not used "RUnit", but I've done similar things by computing the same object two ways and then doing "stopifnot(all.equal(obj1, obj2))". I think the value of the help page is enhanced by showing the "all.equal" but not the "stopifnot". I achieve this using "\dontshow" as follows:

          obj1 <- ...
          obj2 <- ...
          \dontshow{stopifnot(}
          all.equal(obj1, obj2)
          \dontshow{)}

Examples of this are contained, for example, in "fRegress.Rd" in the current "fda" package available from CRAN or R-Forge.
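
A concrete (made-up) instance of this pattern, checking a mean computed two ways, would be:

          x <- 1:10
          obj1 <- mean(x)
          obj2 <- sum(x) / length(x)  # the same quantity, computed by hand
          \dontshow{stopifnot(}
          all.equal(obj1, obj2)
          \dontshow{)}

The rendered help page shows only the "all.equal" line, while "R CMD check" actually executes stopifnot(all.equal(obj1, obj2)) and aborts if the two values differ.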

     Best Wishes,
     Spencer

Jb wrote:
Hi all,
One relatively easy solution is to include R and all relevant versions of the packages used in (a) a reproduction archive and/or (b) a virtual machine. With storage space cheap and the sources of both R and packages available (plus easy, free, cross-platform virtual machine solutions and Linux), one can distribute not only one's own code and data but also everything that was required to do the analyses, down to the OS.

So far in our own work we have just included the relevant package versions, but we will probably start including R as well for our next projects.

Hope this brainstorm helps (and credit to Ben Hansen and Mark Fredrickson for these ideas).

Jake

Jake Bowers
http://jakebowers.org

On Jan 10, 2009, at 1:19 PM, "Nicholas Lewin-Koh" <ni...@hailmail.net> wrote:

Hi,
Unfortunately, one of the cornerstones of the validation paradigm
in the clinical world (as I understand it) is that
consistency/repeatability, documentation (of the program's
consistency and maintenance) and adherence to regulatory
requirements take supremacy, ironically, even over correctness. So
you still get ridiculous requirements from regulatory bodies, like
type III sums of squares, last observation carried forward, etc.
These edifices are very hard to change; I know of people who have
worked their whole careers just to get the FDA to allow other
treatments of missing data.

So what does this have to do with R? It comes down to the point
you made below about the R development cycle incorporating bug fixes
into new releases, and not supporting old versions. I think this has
been rehashed many times, and is not likely to change. So how to
move R into the clinic? From a practical perspective, all the
development and interoperability features of R are very nice,
but how do we maintain things in such a way that, if the underlying
R platform changes, the tool or method does not? And furthermore,
how do we manage this in a cost-effective way, so that it can't
be argued that it is cheaper to pay for SAS???

These are not necessarily questions that R core has to answer,
as the burden of proof of validation is really in the hands of the
company/organization doing the submission. We just like to pretend that
the large price we pay for our SAS support means we can shift liability
:)

Rambling again,
Nicholas


On Fri, 09 Jan 2009 17:07:31 -0600, "Kevin R. Coombes"
<krcoom...@mdacc.tmc.edu> said:
Hi Nicholas,

You raise a very good point. As an R user (who develops a couple of
packages for our own local use), I sometimes find myself cringing in
anticipation of a new R (or BioConductor) release. In my perception
(which is almost certainly exaggerated, but that's why I emphasize that
it is only an opinion), clever theoretical arguments in favor of
structural changes have a tendency to outweigh practical considerations
of backwards compatibility.

One of my own interests is in "reproducible research", and I've been
pushing hard here at M.D. Anderson to get people to use Sweave to
enhance the reproducibility of their own analyses. But, more often than
I would like, I find that reports written in Sweave do not survive the
transition from one version of R to the next, because either the core
implementation or one of the packages they depend on has changed in some
small but meaningful way.

For our own packages, we have been adding extensive regression testing
to ensure that the same numbers come out of various computations, in
order to see the effects of either the changes that we make or the
changes in the packages we depend on.  But doing this in a nontrivial
way with real data leads to test suites that take a long time to run,
and so cannot be incorporated in the nightly builds used by CRAN.
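
A minimal sketch of such a check, comparing current results against
reference values saved from a trusted run (the file names and the
runAnalysis() function are made up for illustration):

ref <- readRDS("reference_results.rds")  # numbers saved from a trusted run
new <- runAnalysis("real_data.rds")      # hypothetical long-running computation
stopifnot(isTRUE(all.equal(ref, new, tolerance = 1e-8)))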

We also encourage our analysts to include a "sessionInfo()" command in
an appendix to each report so we are certain to document what versions
of packages were used.
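
In a Sweave report, a minimal appendix chunk for that (assuming
nothing beyond Sweave and base R) could be:

\section*{Appendix: Session information}
<<sessionInfo, results=tex, echo=FALSE>>=
# Emit the session information as LaTeX into the report
toLatex(sessionInfo())
@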

I suspect that the sort of validation you want will have to rely on
an extensive regression test suite to make certain that the things
you need remain stable from one release to another. That, and you'll
have to be slow about upgrading (which may mean foregoing support
from the mailing lists, where a common refrain in response to bug
reports is that "you aren't using the latest and greatest version",
without an appreciation of the fact that there can be good reasons
for not changing something that you know works...).

Best,
   Kevin

Nicholas Lewin-Koh wrote:
Hi,
Kudos, nice exposure, but to make this more appropriate to R-devel
I would just like to make a small comment about the point made by
the SAS executive about getting on an airplane, yada yada...

1) It would seem to me that R has certification documents
2) anyone designing airplanes, analyzing clinical trials, etc. had
  better be worried about a lot more than whether their software is
  proprietary.

So, from that point of view, it would seem that R has made great
strides over the last 5 years, especially in establishing a role
for open source software solutions in regulated/commercial
environments. The question now is how to meld the archaic notions
of validation and verification seen in industry with the very
different model of open source development. Rather than the
correctness of the software, in which I think R is competitive,
the issue is how to deal with the rapid release cycles of R and
the contributed packages. We pull our hair out in pharma trying to
figure out how we would ever reconcile CRAN and validation
requirements. I have no brilliant solution, just food for thought.

Nicholas
------------------------------
Message: 5
Date: Thu, 8 Jan 2009 13:02:55 +0000 (GMT)
From: Prof Brian Ripley <rip...@stats.ox.ac.uk>
Subject: Re: [Rd] NY Times article
To: Anand Patil <anand.prabhakar.pa...@gmail.com>
Cc: r-devel@r-project.org

It has been all over R-help, in several threads.

https://stat.ethz.ch/pipermail/r-help/2009-January/184119.html
https://stat.ethz.ch/pipermail/r-help/2009-January/184170.html
https://stat.ethz.ch/pipermail/r-help/2009-January/184209.html
https://stat.ethz.ch/pipermail/r-help/2009-January/184232.html
https://stat.ethz.ch/pipermail/r-help/2009-January/184237.html

and more

On Thu, 8 Jan 2009, Anand Patil wrote:

Sorry if this is spam, but I couldn't see it having popped up on the list
yet.
http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?emc=eta1

Anand



--
Brian D. Ripley,                  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

