[Rd] NEWS, WISHLIST, THANKS

2005-06-09 Thread Gabor Grothendieck
I have NEWS, WISHLIST and THANKS files in the 'dyn' package
in the same directory as the DESCRIPTION file but I noticed that they
did not survive the move to CRAN, at least on Windows.
How do I incorporate them so that they are not omitted?

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Suggestion: help(package name)

2005-06-07 Thread Gabor Grothendieck
On 6/7/05, Robert Gentleman [EMAIL PROTECTED] wrote:
 
 
 Robin Hankin wrote:
  My 0.02:
 
  I use Misc.Rd for the purpose that Duncan suggests.  I put things like
  details and rationale for package
  organization, pointers to the most important function(s) in the
  package,  and perhaps function descriptors
  for ubiquitous functions that don't warrant their own help page, but need
  documentation [in
  the case of gsl, this would be strictify() and process.args(), which
  every user needs to know].
 
  It would be *great* to be required to put in package.gsl.R  (or should
  that be gsl.package.Rd?)
  for this purpose.  Then maybe R CMD check could check for its presence
  and throw a wobbly
  if it isn't there.
 
 
 Hi,
  Well, I pretty strenuously object. That is just what vignettes are for
 and the last thing I need is more wobblies from R CMD check.
 
 Function documentation should document functions. If you want to

'?' or 'help' documents topics, as I understand it, not necessarily functions.  For
example,

?iris
?Startup

Further, '?' has a type?topic syntax, as well.

Also, what is a wobbly?

 document something more substantial then please use the tools already
 provided to do that - and if you don't want to, and you want to make use
 of the tools for function documentation in some other way please don't
 try to impose your version of what should happen on others.
 
 Best wishes
   Robert
 
  Some packages have so  much material that it's difficult to know where
  the meat of the functionality lies,
  and Duncan's suggestion would help greatly in these circumstances.
 
  best wishes
 
  rksh
 
 
  On Jun 7, 2005, at 01:11 pm, Duncan Murdoch wrote:
 
  Kurt Hornik wrote:
 
  Henrik Bengtsson writes:
 
  Hi,
   I would like to suggest a standard where all packages provide an Rd
   page with the same name (or aliased) as the name of the package so that
   help(package name) or ?package name is always there.  This is
   especially of interest for large packages with a large package index.
   This page could explain the package in general and give some hints
   on how to start - not like extensive vignettes, but just to get
   started, e.g. list the most important functions.  This page could
   typically contain information that is in the DESCRIPTION file (which
   contains valuable information hardly ever accessed by a general
   user), such as who the maintainer is, how to report bugs and so on.
 
 
  I think this is a good idea.  One minor problem is that for some
  packages that topic name is already in use for a function (e.g. boot).
  For that reason, I'd suggest that there *also* be an alias called
  package.package name, and the package name topic should link to it.
 
  How would this be different from the results of
  help(package = package name)
  ?
 
 
 
  1.  It would work with ?, like other help topics.
 
  2.  It would give an overview.  It's possible to do that in
  DESCRIPTION or INDEX, but you don't get the same style as for other
  help files (e.g. no links to other topics, at least in Windows).
 
 
 
  We should work out what the topic headings should be and extend
  package.skeleton() and prompt() to write a bare-bones file that
  suggests the questions that need to be answered in the file.  The
  headings I'd suggest are:
 
  \name
  \title
  \alias
  \description (longer than the typical entry in the DESCRIPTION file)
  \details (Should give a short guide to the main functions, should
  point out the existence of external documentation like vignettes, etc.)
  \author (could also describe maintainer, if different)
  \references
  \seealso (Should give references to related packages)
  \examples
  \keywords
 
  There is some duplication of material from DESCRIPTION, but usually
  this should be longer and more reader-friendly than that file.
 
  I'd be happy to write the description of this in R Extensions, and
  write the changes to prompt(), if we have agreement that this file
  should be mandatory in 2.2.x or 2.3.x, and you'll write the checks for
  it.  (I think the check should just be for existence of aliases
  package name and package.package name, and could perhaps just give
  a warning in 2.2.x.)
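
  As a purely illustrative sketch of the kind of bare-bones file these
  headings suggest (all names and content below are placeholders, not
  anything R mandates):

```rd
% hypothetical man/mypkg-package.Rd skeleton; "mypkg" is a placeholder
\name{mypkg-package}
\alias{mypkg-package}
\alias{mypkg}
\title{Overview of the mypkg Package}
\description{
  A longer, more reader-friendly description than the entry in DESCRIPTION.
}
\details{
  A short guide to the main functions; point out any vignettes or other
  external documentation here.
}
\author{A. Author (maintainer, if different, listed here too)}
\references{Relevant references.}
\seealso{References to related packages.}
\examples{
## a minimal getting-started example
}
\keyword{package}
```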
 
  Duncan Murdoch
 
 
 
  --
  Robin Hankin
  Uncertainty Analyst
  National Oceanography Centre, Southampton
  European Way, Southampton SO14 3ZH, UK
   tel  023-8059-7743
 
 
 




Re: [Rd] Suggestion: help(package name)

2005-06-07 Thread Gabor Grothendieck
Currently methods?e will look for the alias e-methods, so perhaps package?e
could look for the alias e-package.
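
By way of illustration, the parallel being drawn might look like this at
the prompt (the package?foo form is a proposal here, not existing syntax):

```r
## existing: methods?foo looks for the help alias "foo-methods", e.g.
methods ? predict
## proposed, by analogy (hypothetical, not implemented at the time):
## package ? foo   would look for the alias "foo-package"
```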

On 6/7/05, Achim Zeileis [EMAIL PROTECTED] wrote:
 On Tue, 7 Jun 2005 18:43:37 +0200 Martin Maechler wrote:
 
   Duncan == Duncan Murdoch [EMAIL PROTECTED]
   on Tue, 07 Jun 2005 12:12:57 -0400 writes:
 
  .
 
   The current .Rd files don't just document functions, they also
  document  data objects and classes.
  
   But the main point here is that it's not good to have multiple
   disconnected sets of documentation for a package.  Users
  should be able  to say the equivalent of give me help on foo,
  and get help on foo,  whether it's a function, a data object, a
  package, a method, a class, or  whatever.  It's a bad design to
  force them to ask for the same sort of  thing in different ways
  depending on the type of thing they're asking for.
  ... On 6/7/2005 11:59 AM, Robert Gentleman wrote:
 
  
   Hi Duncan and others,
   I think they are linked. There are tools available both in R
  and in  Bioconductor and some pop things up and some don't. It
  doesn't take much  work to add vignettes to the windows menu bar
  - as we have done in BioC  for some time now - it would be nice
  if this was part of R, but no one  seems to have been interested
  in achieving that. Fixing the help system  to deal with more
  diverse kinds of help would be nice as well - but  taking one
  part of it and saying, now everyone must do it this way is 
  not that helpful.
 
   I respectfully disagree about the main point. My main point is,
  I  don't want more things imposed on me; dealing with  R CMD
  check is  enough of a burden in its current version, without
  someone deciding that  it would be nice to have a whole bunch
  more requirements. Folks should  feel entirely free to do what
  they want - but a little less free to tell  me what I should be
  doing.
 
  Duncan And I disagree pretty strenuously about that.  One
  Duncan of the strengths of R is that it does impose
  Duncan standards on contributed packages, and these make
  Duncan them easier to use, less likely to conflict with
  Duncan each other, and so on.
 
  Duncan We shouldn't impose things lightly, but if they do
  Duncan make packages better, we should feel no reason not
  Duncan to tell you what you should be doing.
 
  As Kurt mentioned early in this thread, we currently have
  the auto-generated information from
  either
 
  help(package = pkgname)   # or (equivalently!)
  library(help = pkgname)
 
  which shows
DESCRIPTION +
(user-written/auto-generated) INDEX +
mentions vignettes and other contents in inst/doc/
 
  Now if Duncan would write some R code that produces a
  man/pkgname.Rd file from the above information
 
 I would like to second what Gabor said earlier in this thread: we cannot
 simply create man/pkgname.Rd because this will already exist for many
 packages. Examples that come to my mind include: betareg, chron,
 ellipse, flexmix, ineq, zoo, and many more. Renaming the package is not
 an option, so probably the man page has to be renamed to something like
 man/pkgname.package.Rd, say. And then doing
  help(package = foo)
 and
  help(foo.package)
 is not that much of a difference, is it? Personally, I find the former
 more intuitive.
 Z
 
  -- and as he mentioned also
  added some of that functionality to package.skeleton(),
  I think everyone could become happy, i.e.,
  we could improve the system in the future with only a very light
  burden on the maintainers of currently existing packages: You'd
  have to run the new R function only once for every package you
  maintain.
 
  Also, the use of a user-written INDEX file could eventually
  completely be abandoned in favor of maintaining
  man/pkgname.Rd, which is much nicer;
  I'd welcome such a direction quite a bit.
 
  And as much as I do like (and read) the vignettes that are
  available, I also do agree that writing one other *.Rd file is
  easier for many new package authors than to write a
  vignette -- the package author already had to learn *.Rd syntax
  anyway -- and it's nice to be able to produce something where
  hyperlinks to the other existing reference material (i.e. help
  pages) just work out of the box.
 
  OTOH, we should still keep in mind that it's worth trying to
  get  bi-directional linking between (PDF) vignettes and help
  files  (assuming all relevant files are installed by R CMD
  INSTALL of course).
 
  Martin
 
  Duncan Currently R has 3 types of help: the .Rd files in
  Duncan the man directory (which are converted into plain
  Duncan text, HTML, compiled HTML, LaTex, DVI, PDF, etc),
  Duncan the vignettes, and unstructured files in inst/doc.
  Duncan We currently require .Rd files for every function
  Duncan and data 

Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-07 Thread Gabor Grothendieck
On 6/7/05, Paul Murrell [EMAIL PROTECTED] wrote:
 Hi
 
 
 Gabor Grothendieck wrote:
  On 6/6/05, Paul Murrell [EMAIL PROTECTED] wrote:
 
 Hi
 
 
 Gabor Grothendieck wrote:
 
 On 6/2/05, Paul Murrell [EMAIL PROTECTED] wrote:
 
 
 Hi
 
 
 Thanks.  I have mucked around in vpTree structures and discovered it's
 actually quite easy to specify children so I have changed my example
 so that instead of naming the children of 'layout' and then remembering
 coordinates linked to the names of the children of 'layout' in
 the 'coords' structure (which really just duplicates state information
 already available in grid) it simply follows the order
 of the children of 'layout' directly.  This permits elimination of 'coords'
 and the two naming functions.  Using the depth approach you advocate,
 'with' also becomes shorter and I think I have it to the point where it 
 works
 with both vpPath and viewport classes.  Once Deepayan implements
 the use.viewport= argument to print, 'with' can be eliminated too.  No
 questions this time but I thought I would post the latest version for
 completeness. Regards.
 
 
 Ok.  I can see this working ... for now.  The disadvantage with this
 approach is that it makes use of the undocumented, internal structure of
 a viewport tree to grab a list of child viewports.  A worse example of
 the same thing is that the with() methods make use of a happy
 coincidence that both viewport objects and viewport path objects share a
 component called name (and an even happier coincidence that they
 contain the same information).  I think it would be cleaner and better
 practice, despite requiring longer code, to make use of the documented
 interface which requires specifying viewport names and viewport paths.
 The internal structure of objects is not guaranteed to be stable.
 
 
  Perhaps accessor functions could be provided that allow one to
  retrieve the name of a viewport and the name of a vpPath in
  a safe way.  These could be as simple as:
 
  names.viewport <- names.vpPath <- function(x) x$name
 
 
 Fair enough.  If I say use the API, I should provide a useful API :)
 
 This is a reasonable request for viewports;  the name component of a
 viewport is a sensible thing to want.
 
 OTOH, it is not necessarily reasonable for a viewport path;  not all
 components of an object should necessarily have accessors.  The name
 component of a viewport path is the last element in the path.  Perhaps
 an API should be supplied for extracting parts of a viewport path, but
 it should probably be something along the lines of car()/cdr() or
 head()/tail() or explode() to get different bits of the path.
 
 Accessing the children of a viewport is subtly problematic too.
 Directly accessing the children slot and using the order of the
 children in that slot is dangerous because there is no claim made by
 the system as to how the children are internally ordered.  Again, it
 works currently, but it makes incorrect assumptions about what the
 system is doing internally so is vulnerable to future changes.

That is the point of an accessor.  If the internals change then the
accessor is modified to hide the change so that the user using the
accessor is not impacted.  

It seems that grid already partly supports this with the childNames
function.  It could be made generic and a method provided
to cover the classes discussed here too.
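
A minimal sketch of the suggestion, assuming grid's childNames() were made
generic with a method for viewports (hypothetical code, not the actual grid
API at the time):

```r
## hypothetical: a generic childNames() hiding the internal
## $children slot behind an accessor
childNames <- function(x, ...) UseMethod("childNames")
childNames.viewport <- function(x, ...) names(x$children)
```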

 
 So again, the recommended approach is to use the API provided;  you
 provide the naming scheme for viewports and you control the order in
 which viewports are used.
 
 Paul
 
 
  Similarly an accessor function to safely retrieve the children would
  be nice.  Again, it should ideally be possible to
  have a generic with methods for various grid classes.
 
  Then the  relevant line in the code concerning name
  could be written in a safe way like this:
 
  depth <- if (data$name == "ROOT") 0 else downViewport(names(data))
 
  and similarly for the children.
 
 
 
 Paul
 
 
 
 [pushLayout is same as before except there are no names on the
 children of 'layout' and the rest is new]
 
 library(grid)
 library(lattice)

 pushLayout <- function(nr, nc, name = "layout") {
   pushViewport(viewport(layout = grid.layout(nr, nc), name = name))
   for (i in 1:nr) {
     for (j in 1:nc) {
       pushViewport(viewport(layout.pos.row = i, layout.pos.col = j))
       upViewport()
     }
   }
   upViewport()
 }

 with.vpPath <- with.viewport <- function(data, expr, ...) {
   # if data is a vpPath it cannot be ROOT since NULL will never
   # dispatch here
   depth <- if (data$name == "ROOT") 0 else downViewport(data$name)
   result <- eval.parent(substitute(expr))
   upViewport(depth)
   invisible(result)
 }

 grid.newpage()

 # specify number of cells to fill and number of rows
 n <- 5; nr <- 3

 nc <- ceiling(n/nr)
 downViewport(pushLayout(nr, nc))

 vpt <- current.vpTree(all = FALSE)
 for (k in 1:n) with(vpt$children[[k]],
   print(xyplot(v ~ v, list(v = 1:k)), newpage = FALSE)
 )
 
 
 --
 Dr Paul

Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-07 Thread Gabor Grothendieck
Here is the code once again.  This time I have supplied two 
names methods and a getChildren.viewport function to 
encapsulate the corresponding grid internals.  It would
be easiest if grid provided these itself but in the absence
of that, this does encapsulate the dependencies on grid
internals in a well-defined set of functions.  Note that
names is only used in 'with', and 'with' will be eliminated
once Deepayan adds the use.viewport= argument (or whatever
it's called) to print.  I am not sure from your response
whether or not you intend to add these items to the grid
API but in any case this provides an intermediate
level of safety.

library(grid)
library(lattice)

pushLayout <- function(nr, nc, name = "layout") {
  pushViewport(viewport(layout = grid.layout(nr, nc), name = name))
  for (i in 1:nr) {
    for (j in 1:nc) {
      pushViewport(viewport(layout.pos.row = i, layout.pos.col = j))
      upViewport()
    }
  }
  upViewport()
}

names.vpPath <- names.viewport <- function(x) x$name

with.vpPath <- with.viewport <- function(data, expr, ...) {
  # if data is a vpPath it cannot be ROOT since
  # NULL will never dispatch here
  depth <- if (data$name == "ROOT") 0 else downViewport(names(data))
  result <- eval.parent(substitute(expr))
  upViewport(depth)
  invisible(result)
}

getChildren.viewport <- function(x) x$children

grid.newpage()

# specify number of cells to fill and number of rows
n <- 5; nr <- 3

nc <- ceiling(n/nr)
downViewport(pushLayout(nr, nc))

vpt <- current.vpTree(all = FALSE)
for (k in 1:n) with(getChildren.viewport(vpt)[[k]],
  print(xyplot(v ~ v, list(v = 1:k)), newpage = FALSE)
)



Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-07 Thread Gabor Grothendieck
Yes, I understand that, although such an ordering is convenient for
the user, as the significant reduction in code size here shows.  I
wonder if there might be some performance parameter (e.g. hash=)
to control it: if hash = TRUE then no ordering guarantee is provided;
otherwise the order is kept.

On 6/7/05, Paul Murrell [EMAIL PROTECTED] wrote:
 Hi
 
 
 Gabor Grothendieck wrote:
  On 6/7/05, Paul Murrell [EMAIL PROTECTED] wrote:
 
 Hi
 
 
 Gabor Grothendieck wrote:
 
 On 6/6/05, Paul Murrell [EMAIL PROTECTED] wrote:
 
 
 Hi
 
 
 Gabor Grothendieck wrote:
 
 
 On 6/2/05, Paul Murrell [EMAIL PROTECTED] wrote:
 
 
 
 Hi
 
 
  Thanks.  I have mucked around in vpTree structures and discovered it's
 actually quite easy to specify children so I have changed my example
 so that instead of naming the children of 'layout' and then remembering
 coordinates linked to the names of the children of 'layout' in
 the 'coords' structure (which really just duplicates state information
 already available in grid) it simply follows the order
 of the children of 'layout' directly.  This permits elimination of 
 'coords'
 and the two naming functions.  Using the depth approach you advocate,
 'with' also becomes shorter and I think I have it to the point where it 
 works
 with both vpPath and viewport classes.  Once Deepayan implements
 the use.viewport= argument to print, 'with' can be eliminated too.  No
 questions this time but I thought I would post the latest version for
 completeness. Regards.
 
 
 Ok.  I can see this working ... for now.  The disadvantage with this
 approach is that it makes use of the undocumented, internal structure of
 a viewport tree to grab a list of child viewports.  A worse example of
 the same thing is that the with() methods make use of a happy
 coincidence that both viewport objects and viewport path objects share a
 component called name (and an even happier coincidence that they
 contain the same information).  I think it would be cleaner and better
 practice, despite requiring longer code, to make use of the documented
 interface which requires specifying viewport names and viewport paths.
 The internal structure of objects is not guaranteed to be stable.
 
 
 Perhaps accessor functions could be provided that allow one to
 retrieve the name of a viewport and the name of a vpPath in
 a safe way.  These could be as simple as:
 
  names.viewport <- names.vpPath <- function(x) x$name
 
 
 Fair enough.  If I say use the API, I should provide a useful API :)
 
 This is a reasonable request for viewports;  the name component of a
 viewport is a sensible thing to want.
 
 OTOH, it is not necessarily reasonable for a viewport path;  not all
 components of an object should necessarily have accessors.  The name
 component of a viewport path is the last element in the path.  Perhaps
 an API should be supplied for extracting parts of a viewport path, but
 it should probably be something along the lines of car()/cdr() or
 head()/tail() or explode() to get different bits of the path.
 
 Accessing the children of a viewport is subtly problematic too.
 Directly accessing the children slot and using the order of the
 children in that slot is dangerous because there is no claim made by
 the system as to how the children are internally ordered.  Again, it
 works currently, but it makes incorrect assumptions about what the
 system is doing internally so is vulnerable to future changes.
 
 
  That is the point of an accessor.  If the internals change then the
  accessor is modified to hide the change so that the user using the
  accessor is not impacted.
 
  It seems that grid already partly supports this with the childNames
  function.  It could be made generic and a method provided
  to cover the classes discussed here too.
 
 
 I agree that a childNames() method for a viewport tree is probably
 reasonable.  The subtle problem is the fact that your code makes use of
 the *order* of the names that function would return, when in fact there
 is no claim that they will be in any particular order.
 
 Paul
 
 
 So again, the recommended approach is to use the API provided;  you
 provide the naming scheme for viewports and you control the order in
 which viewports are used.
 
 Paul
 
 
 
 Similarly an accessor function to safely retrieve the children would
  be nice.  Again, it should ideally be possible to
 have a generic with methods for various grid classes.
 
 Then the  relevant line in the code concerning name
 could be written in a safe way like this:
 
  depth <- if (data$name == "ROOT") 0 else downViewport(names(data))
 
 and similarly for the children.
 
 
 
 
 Paul
 
 
 
 
 [pushLayout is same as before except there are no names on the
 children of 'layout' and the rest is new]
 
 library(grid)
 library(lattice)
 
  pushLayout <- function(nr, nc, name = "layout") {
    pushViewport(viewport(layout = grid.layout(nr, nc), name = name))
    for (i in 1:nr) {
      for (j in 1:nc) {
        pushViewport(viewport(layout.pos.row = i, layout.pos.col = j

Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-07 Thread Gabor Grothendieck
Thanks.  One other comment to consider when thinking about this.
Even if it's not possible or advisable to guarantee order, even
without the hash= idea, it may be possible to guarantee that default
names are generated in some order that getChildren can use to return
the children in the same order they were created, or perhaps some
sort of timestamp could be attached to objects to facilitate later
traversal.

On 6/7/05, Paul Murrell [EMAIL PROTECTED] wrote:
 Hi
 
 
 Gabor Grothendieck wrote:
  Here is the code once again.  This time I have supplied two
  names methods and a getChildren.viewport function to
  encapsulate the corresponding grid internals.  It would
  be easiest if grid provided these itself but in the absence
  of that this does encapsulate dependencies on grid
  internals to a well defined set of functions.   Note that
  names is only used in 'with' and 'with' will be eliminated
  once Deepayan adds the use.viewport= (or whatever
  its called) to print.   I am not sure from your response
  whether or not you intend to add these items to the grid
  API but in any case this provides an intermediate
  level of safety.
 
 
 Yep, I've made a note to look at adding these to the grid API.
 Thanks.
 
 Paul
 
 
  library(grid)
  library(lattice)

  pushLayout <- function(nr, nc, name = "layout") {
    pushViewport(viewport(layout = grid.layout(nr, nc), name = name))
    for (i in 1:nr) {
      for (j in 1:nc) {
        pushViewport(viewport(layout.pos.row = i, layout.pos.col = j))
        upViewport()
      }
    }
    upViewport()
  }

  names.vpPath <- names.viewport <- function(x) x$name

  with.vpPath <- with.viewport <- function(data, expr, ...) {
    # if data is a vpPath it cannot be ROOT since
    # NULL will never dispatch here
    depth <- if (data$name == "ROOT") 0 else downViewport(names(data))
    result <- eval.parent(substitute(expr))
    upViewport(depth)
    invisible(result)
  }

  getChildren.viewport <- function(x) x$children

  grid.newpage()

  # specify number of cells to fill and number of rows
  n <- 5; nr <- 3

  nc <- ceiling(n/nr)
  downViewport(pushLayout(nr, nc))

  vpt <- current.vpTree(all = FALSE)
  for (k in 1:n) with(getChildren.viewport(vpt)[[k]],
    print(xyplot(v ~ v, list(v = 1:k)), newpage = FALSE)
  )
 
 
 --
 Dr Paul Murrell
 Department of Statistics
 The University of Auckland
 Private Bag 92019
 Auckland
 New Zealand
 64 9 3737599 x85392
 [EMAIL PROTECTED]
 http://www.stat.auckland.ac.nz/~paul/
 




Re: [Rd] Suggestion: help(package name)

2005-06-07 Thread Gabor Grothendieck
My understanding is that one could still build, install and distribute
a package that did not conform to this requirement, but it would
fail R CMD check.  Thus, as long as you don't want to place it
in a repository that requires a clean R CMD check, you are
under no obligation to do it.  But if you do want to use CRAN or
another repository that enforces quality via R CMD check, then
the package must meet that standard, and to that extent it would
be a requirement.

Perhaps this could even be expanded into quality levels.  In a sense this
is already the case as the level can be regarded as
whether the package:

- fails R CMD CHECK
- passes R CMD CHECK but with warnings
- fully passes R CMD CHECK

as shown in the check summary:
   http://probability.ca/cran/src/contrib/checkSummary.html

An additional level could be added for those packages possessing 
a vignette (where the horizontal line is the threshold for CRAN
admission) but the quality would be published on the web as it
is now:

- fails R CMD check
-
- passes it with warnings
- fully passes it
- fully passes it and has a vignette

Personally I think Duncan's proposal is an excellent idea and
that the quality of packages should be driven and enforced
by the needs of the users.  This makes packages easier for users
to use, and though it requires extra work from the developers it
is not onerous.

On 6/7/05, Liaw, Andy [EMAIL PROTECTED] wrote:
 Let me add to the pot:
 
 I think Robert and Brian are against imposing additional _requirement_ on
 packages to provide an overview in .Rd, and I tend to agree with that
 sentiment.
 
 However, if such a facility is made optional (like vignettes) for package
 author/maintainer, then I have no problem with it.  Perhaps it can work like
 the CITATION file:  The package author/maintainer can choose to (or not to)
 use it.  If one is not provided in the package source, then something
 halfway sensible is auto-generated from various files (or perhaps just runs
 help(package=pkg).
 
 Or perhaps yet another function can be added to the `utils' package, like
 packageOverview(), which can either:
 - open an overview vignette if one is provided
 - open the overview .Rd in whatever format the default help is in
 - run help(package=pkg) if neither is available
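
 A rough sketch of the fallback behaviour described above
 (packageOverview() is the proposed name, not an existing utils
 function; the lookup details are assumptions):

```r
## hypothetical packageOverview(): try a vignette, then an overview
## help page, then fall back to the auto-generated listing
packageOverview <- function(pkg) {
  # 1. an overview vignette, if one is provided
  vig <- vignette(package = pkg)$results
  if (nrow(vig) > 0)
    return(print(vignette(vig[1, "Item"], package = pkg)))
  # 2. an overview help page such as "pkg-package", if one exists
  h <- do.call(help, list(paste(pkg, "package", sep = "-")))
  if (length(h) > 0)
    return(print(h))
  # 3. otherwise, the auto-generated listing
  help(package = pkg)
}
```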
 
 Just my $0.02...
 
 Andy
 
  From: Duncan Murdoch
 
  Prof Brian Ripley wrote:
   I share Robert's `pretty strenuous' objections.
  
   Adding compulsory things for package writers seems to me to
  need very
   compelling arguments.  Checking that a package does what it
  says (e.g. the
   code in vignettes can be run) is one thing, but checking it
  does things it
   does not say it wants to do is quite another.
 
  I don't understand your complaint.  Could you explain what you
  meant by
  "checking it does things it does not say it wants to do"?
 
  My proposal (modified following the suggestions I've heard so
  far) is as
  follows:
 
- to check that a couple of help topic aliases exist (pkg.package
  and pkg)
- to recommend that pkg.package contain general information about
  the package, and that pkg be an alias for it, if it isn't used for
  some other purpose.
- to write promptPackage() to help create an initial version of
  pkg.package.Rd.  It can get some information from the DESCRIPTION
  file; perhaps it could go looking for a vignette, or the INDEX, or
- to modify the other help system tools to make use of this
  (e.g. the
  package:pkg heading on a page would become a link to the
  pkg.package
  alias, etc.)
 
  None of these things are very much work, and I'd be happy to
  do them and
  document them.  The thing that will be more work is to write the
  pkg.package.Rd files for every package. (I'd do it for the base
  packages; they'd be short.)  It won't be a huge amount of
  work for any
  one package (many of them already have the basic content in various
  places, so for those it's mostly a matter of reformatting),
  but in total
  it will be a lot.
 
  I think the benefit of this will be that the help for a package will
  show up in a standard location, using the standard method for looking
  for it.  This is not a huge benefit for any users who already
  know all
  about the current ways help can be given, but I think it
  would be a real
  benefit for users who aren't so familiar with things.   It
  would help to
  unify the help system:  everyone knows about ?topic, so providing a
  standard way for that to lead into all the rest of the documentation
  seems obviously beneficial to me.
 
  Making this optional would weaken it quite a bit.  Packages couldn't
  give links to the main page in other packages if they weren't
  guaranteed
to exist; producing the HTML would be more difficult if
  links worked
  sometimes and didn't work other times, etc.
 
  Robert Gentleman wrote:
 Let's see, some of us (three years ago) developed a tool
  to solve this
   problem.
 
  Do you mean 

Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-06 Thread Gabor Grothendieck
On 6/6/05, Paul Murrell [EMAIL PROTECTED] wrote:
 Hi
 
 
 Gabor Grothendieck wrote:
  On 6/2/05, Paul Murrell [EMAIL PROTECTED] wrote:
 
 Hi
 
 
   Thanks.  I have mucked around in vpTree structures and discovered it's
  actually quite easy to specify children so I have changed my example
  so that instead of naming the children of 'layout' and then remembering
  coordinates linked to the names of the children of 'layout' in
  the 'coords' structure (which really just duplicates state information
  already available in grid) it simply follows the order
  of the children of 'layout' directly.  This permits elimination of 'coords'
  and the two naming functions.  Using the depth approach you advocate,
  'with' also becomes shorter and I think I have it to the point where it 
  works
  with both vpPath and viewport classes.  Once Deepayan implements
  the use.viewport= argument to print, 'with' can be eliminated too.  No
  questions this time but I thought I would post the latest version for
  completeness. Regards.
 
 
 Ok.  I can see this working ... for now.  The disadvantage with this
 approach is that it makes use of the undocumented, internal structure of
 a viewport tree to grab a list of child viewports.  A worse example of
 the same thing is that the with() methods make use of a happy
 coincidence that both viewport objects and viewport path objects share a
 component called name (and an even happier coincidence that they
 contain the same information).  I think it would be cleaner and better
 practice, despite requiring longer code, to make use of the documented
 interface which requires specifying viewport names and viewport paths.
 The internal structure of objects is not guaranteed to be stable.

Perhaps accessor functions could be provided that allow one to 
retrieve the name of a viewport and the name of a vpPath in 
a safe way.  These could be as simple as:

names.viewport <- names.vpPath <- function(x) x$name

Similarly an accessor function to safely retrieve the children would 
be nice.  Again, it should ideally be possible to
have a generic with methods for various grid classes.

Then the  relevant line in the code concerning name
could be written in a safe way like this:

depth <- if (data$name == "ROOT") 0 else downViewport(names(data))

and similarly for the children.


 
 Paul
 
 
  [pushLayout is same as before except there are no names on the
  children of 'layout' and the rest is new]
 
  library(grid)
  library(lattice)
 
  pushLayout <- function(nr, nc, name="layout") {
    pushViewport(viewport(layout=grid.layout(nr, nc), name=name))
    for (i in 1:nr) {
      for (j in 1:nc) {
        pushViewport(viewport(layout.pos.row=i, layout.pos.col=j))
        upViewport()
      }
    }
    upViewport()
  }

  with.vpPath <- with.viewport <- function(data, expr, ...) {
    # if data is a vpPath it cannot be ROOT since NULL will never
    # dispatch here
    depth <- if (data$name == "ROOT") 0 else downViewport(data$name)
    result <- eval.parent(substitute(expr))
    upViewport(depth)
    invisible(result)
  }
 
  grid.newpage()

  # specify number of cells to fill and number of rows
  n <- 5; nr <- 3

  nc <- ceiling(n/nr)
  downViewport(pushLayout(nr, nc))

  vpt <- current.vpTree(all = FALSE)
  for(k in 1:n) with(vpt$children[[k]],
    print( xyplot(v ~ v, list(v = 1:k)), newpage = FALSE )
  )
 
 
 --
 Dr Paul Murrell
 Department of Statistics
 The University of Auckland
 Private Bag 92019
 Auckland
 New Zealand
 64 9 3737599 x85392
 [EMAIL PROTECTED]
 http://www.stat.auckland.ac.nz/~paul/
 




[Rd] update.packages keeps trying to update gregmisc

2005-06-06 Thread Gabor Grothendieck
If I issue the command 
 update.packages() 
it wants to update 'gregmisc' but then if I do it again right afterwards it 
still wants to update 'gregmisc'.  It should have updated everything
the first time and should not be trying to update anything the second
time.  Any comments?

Here is the transcript of a session (R version at end):


 update.packages()
gregmisc :
 Version 2.0.7 installed in C:/PROGRA~1/R/RW2010~1/library 
 Version 2.0.8 available at http://probability.ca/cran
Update (y/N/c)?  y
trying URL 
'http://probability.ca/cran/bin/windows/contrib/2.1/gregmisc_2.0.8.zip'
Content type 'application/zip' length 2465 bytes
opened URL
downloaded 2465 bytes

package 'gregmisc' successfully unpacked and MD5 sums checked

The downloaded packages are in
C:\Documents and Settings\Grothendieck\Local
Settings\Temp\Rtmp9430\downloaded_packages
updating HTML package descriptions
 update.packages()
gregmisc :
 Version 2.0.7 installed in C:/PROGRA~1/R/RW2010~1/library 
 Version 2.0.8 available at http://probability.ca/cran
Update (y/N/c)?  y
trying URL 
'http://probability.ca/cran/bin/windows/contrib/2.1/gregmisc_2.0.8.zip'
Content type 'application/zip' length 2465 bytes
opened URL
downloaded 2465 bytes

package 'gregmisc' successfully unpacked and MD5 sums checked

The downloaded packages are in
C:\Documents and Settings\Grothendieck\Local
Settings\Temp\Rtmp9430\downloaded_packages
updating HTML package descriptions
 R.version.string # Windows XP
[1] R version 2.1.0, 2005-05-14



[Rd] Re: update.packages keeps trying to update gregmisc

2005-06-06 Thread Gabor Grothendieck
One other point I just noticed.  If I run new.packages() it reports this,
namely that gregmisc has extra contents of NA.

np <- new.packages()
--- Please select a CRAN mirror for use in this session ---
Warning messages:
1: bundle 'VR' is incompletely installed in: new.packages() 
2: bundle 'VR' has extra contents 'nnet' in: new.packages() 
3: bundle 'gregmisc' is incompletely installed in: new.packages() 
4: bundle 'gregmisc' has extra contents 'NA' in: new.packages() 

On 6/6/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
 If I issue the command
  update.packages()
 it wants to update 'gregmisc' but then if I do it again right afterwards it
 still wants to update 'gregmisc'.  It should have updated everything
 the first time and should not be trying to update anything the second
 time.  Any comments?
 
 Here is the transcript of a session (R version at end):
 
 
  update.packages()
 gregmisc :
  Version 2.0.7 installed in C:/PROGRA~1/R/RW2010~1/library
  Version 2.0.8 available at http://probability.ca/cran
 Update (y/N/c)?  y
 trying URL 
 'http://probability.ca/cran/bin/windows/contrib/2.1/gregmisc_2.0.8.zip'
 Content type 'application/zip' length 2465 bytes
 opened URL
 downloaded 2465 bytes
 
 package 'gregmisc' successfully unpacked and MD5 sums checked
 
 The downloaded packages are in
C:\Documents and Settings\Grothendieck\Local
 Settings\Temp\Rtmp9430\downloaded_packages
 updating HTML package descriptions
  update.packages()
 gregmisc :
  Version 2.0.7 installed in C:/PROGRA~1/R/RW2010~1/library
  Version 2.0.8 available at http://probability.ca/cran
 Update (y/N/c)?  y
 trying URL 
 'http://probability.ca/cran/bin/windows/contrib/2.1/gregmisc_2.0.8.zip'
 Content type 'application/zip' length 2465 bytes
 opened URL
 downloaded 2465 bytes
 
 package 'gregmisc' successfully unpacked and MD5 sums checked
 
 The downloaded packages are in
C:\Documents and Settings\Grothendieck\Local
 Settings\Temp\Rtmp9430\downloaded_packages
 updating HTML package descriptions
  R.version.string # Windows XP
 [1] R version 2.1.0, 2005-05-14




Re: [Rd] Re: [R] A long digression on packages

2005-06-05 Thread Gabor Grothendieck
On 6/5/05, M. Edward (Ed) Borasky [EMAIL PROTECTED] wrote:
 
 
 Duncan Murdoch wrote:
 
  Of course, with disk sizes as they are now, it's not unreasonable to
  install all of the contributed CRAN packages on a PC.  Then
  help.search() *will* do searches through them all.
 
 Some of them are very specialized, and some of them have non-CRAN
 dependencies. I've done a few load everything from CRAN operations on
 my Linux boxes, only to overflow the warnings list with missing Linux
 software. And, as an example, I have zero use for molecular biology
 packages.
 
 Dirk Eddelbuettel has done a lot of work integrating the CRAN and other
 R package collections with the Debian GNU/Linux package management
 system. This rather neatly solves the non-CRAN dependency problems, at
 least for Debian.
 
 Other people have done similar things for Perl packages and Common Lisp
 packages, both in Debian and in Gentoo's Portage package management
 system. CRAN could easily be integrated into Portage, but nobody has
 stepped forward to volunteer. Maybe when I retire ... :)
 
 And where does this leave Windows users? There's nothing like Debian or
 Portage for them; CRAN would have to build it from scratch.

I think that some time ago there was a discussion of having a downloadable
file that help.search could search through, so that a relatively small
download, and no package installation, would allow a comprehensive
offline help.search of all CRAN packages.  An online version of
help.search might be another possibility.



Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-03 Thread Gabor Grothendieck
On 6/2/05, Paul Murrell [EMAIL PROTECTED] wrote:
 Hi

Thanks.  I have mucked around in vpTree structures and discovered it's
actually quite easy to specify children, so I have changed my example
so that instead of naming the children of 'layout' and then remembering 
coordinates linked to the names of the children of 'layout' in 
the 'coords' structure (which really just duplicates state information
already available in grid) it simply follows the order
of the children of 'layout' directly.  This permits elimination of 'coords' 
and the two naming functions.  Using the depth approach you advocate,
'with' also becomes shorter and I think I have it to the point where it works 
with both vpPath and viewport classes.  Once Deepayan implements 
the use.viewport= argument to print, 'with' can be eliminated too.  No 
questions this time but I thought I would post the latest version for
completeness. Regards.

[pushLayout is same as before except there are no names on the
children of 'layout' and the rest is new]

library(grid)
library(lattice)

pushLayout <- function(nr, nc, name="layout") {
  pushViewport(viewport(layout=grid.layout(nr, nc), name=name))
  for (i in 1:nr) {
    for (j in 1:nc) {
      pushViewport(viewport(layout.pos.row=i, layout.pos.col=j))
      upViewport()
    }
  }
  upViewport()
}

with.vpPath <- with.viewport <- function(data, expr, ...) {
  # if data is a vpPath it cannot be ROOT since NULL will never
  # dispatch here
  depth <- if (data$name == "ROOT") 0 else downViewport(data$name)
  result <- eval.parent(substitute(expr))
  upViewport(depth)
  invisible(result)
}

grid.newpage()

# specify number of cells to fill and number of rows
n <- 5; nr <- 3

nc <- ceiling(n/nr)
downViewport(pushLayout(nr, nc))

vpt <- current.vpTree(all = FALSE)
for(k in 1:n) with(vpt$children[[k]],
  print( xyplot(v ~ v, list(v = 1:k)), newpage = FALSE )
)



Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-02 Thread Gabor Grothendieck
On 6/1/05, Paul Murrell [EMAIL PROTECTED] wrote:
 Hi
 
 
 Gabor Grothendieck wrote:
  [moved from r-help to r-devel]
 
  On 5/31/05, Paul Murrell [EMAIL PROTECTED] wrote:
 
 
# mm.row[j] gives the row in the layout of the jth cell
# mm.col[j] gives the col in the layout of the jth cell
mm <- matrix(seq(nr*nc), nr, nc)
mm.row <- c(row(mm))
mm.col <- c(col(mm))
 
   # go to next cell in the array
j <- j + 1 # increment position
   pushViewport(viewport(layout.pos.row = mm.row[j], layout.pos.col = 
  mm.col[j]))
 
 Is that how to do it or is there some layout/mfcol-like way?
 
 
 That is how to do it.
 
 As far as grid is concerned, all viewports are equal and grid has no
 idea whether a viewport corresponds to a plot region or a margin or
 whatever, so grid has no concept of which viewport is the next one to use.
 
 
 
  OK. Thanks.  One suggestion.  Maybe the cells in a layout could have
  an order to them and there could be an optional argument that takes a linear
  index directly allowing easy linear traversals:
 
  for(i in seq(nr*nc)) {
 pushViewport(viewport(i)) # might need different syntax here
 xyplot(seq(i) ~ seq(i))
 popViewport()
  }
 
 
 I think this sort of thing can easily be built on top rather than into
 the existing system.  For example, here's a function that pushes all of
 the basic cells in a layout using a simple naming convention:
 
 layoutVPname <- function(i, j) {
   paste("layoutViewport", i, ",", j, sep="")
 }
 
 layoutVPpath <- function(i, j, name="layout") {
   vpPath(name, layoutVPname(i, j))
 }
 
 pushLayout <- function(nr, nc, name="layout") {
   pushViewport(viewport(layout=grid.layout(nr, nc),
                         name=name))
   for (i in 1:nr) {
     for (j in 1:nc) {
       pushViewport(viewport(layout.pos.row=i,
                             layout.pos.col=j,
                             name=layoutVPname(i, j)))
       upViewport()
     }
   }
   upViewport()
 }
 
 And here's a use of the function to push lots of layout cells, then draw
 lattice plots in different cells using downViewport() to go to the cell
 with the appropriate name.  In this case, we use cells by column, but
 simply reverse the order of the loops to use cells by row.
 
 pushLayout(2, 3)
 for (i in 1:2) {
   for (j in 1:3){
depth <- downViewport(layoutVPpath(i, j))
 print(xyplot(seq(i*j) ~ seq(i*j)), newpage=FALSE)
 upViewport(depth)
   }
 }
 
 
  and taking it one further perhaps 'with' could have a viewport method
  that automatically pushes the viewport on entry and pops or moves
  up one level on exit reducing the above to:
 
  for(i in seq(nr*nc)) with(viewport(i), xyplot(seq(i) ~ seq(i)))
 
 
 The raw grid functions have a 'vp' argument for this purpose.  It would
 be nice if lattice functions had something similar (or maybe just
 print.trellis).  Here's your example using the 'vp' argument to
 grid.text() (and using the layout that was pushed above) ...
 
 for (i in 1:2) {
   for (j in 1:3){
 grid.text(i*j, vp=layoutVPpath(i, j))
   }
 }
 



The following includes an implementation of 'with.vpPath'.
I got some strange results, but by trial and error I seem
to have circumvented them, yet I am still not sure that I have
the real solution:

1. If I delete the indicated line with the comments
which special-cases "ROOT" then it gives the error also
shown in the comments.  Why do I have to handle "ROOT" specially?

2. If I know a viewport how can I find out its vpPath?

3. Will identical code to my with.vpPath work with
viewports if I relabel the name to with.viewport?
Will seekViewport work with viewport too?  The
docs say seekViewport takes a name but it seems
it at least works on a vpPath too. I would like to
be able to hand to 'with' any reasonable
grid object (vpPath, name, viewport, any other objects?)
and have it work as expected.

4. Given a viewport how can one find its vpPath?
its children? its parent?
vp - current.viewport()
vp$name # this gets name but I want entire vpPath

5. How can I pop everything I have created?  Do
I have to keep track of every viewport and then
visit each one and pop it?

Most of the code below is taken from your post
to me but 'with.vpPath' onward are new.

Thanks.


library(lattice)
library(grid)

layoutVPname <- function(i, j) {
  paste("layoutViewport", i, j, sep = ".")
}

layoutVPpath <- function(i, j, name="layout") {
  vpPath(name, layoutVPname(i, j))
}

pushLayout <- function(nr, nc, name="layout") {
  pushViewport(viewport(layout=grid.layout(nr, nc), name=name))
  for (i in 1:nr) {
    for (j in 1:nc) {
      pushViewport(viewport(layout.pos.row=i,
                            layout.pos.col=j,
                            name=layoutVPname(i, j)))
      upViewport()
    }
  }
  upViewport()
}

with.vpPath <- function(data, expr, ...) {  # modified from drawInVP
   cur <- current.viewport()
   seekViewport(data)
   result <- eval.parent(substitute(expr))
   # if I comment out next line I get this error:
   # Error in downViewport.vpPath

Re: [Rd] 1/tan(-0) != 1/tan(0)

2005-06-01 Thread Gabor Grothendieck
On 6/1/05, Simon Urbanek [EMAIL PROTECTED] wrote:
 On Jun 1, 2005, at 5:50 AM, (Ted Harding) wrote:
 
  However, a query: Clearly from the above (ahich I can reproduce
  too), tan() can distinguish between -0 and +0, and return different
  results (otherwise 1/tan() would not return different results).
 
  But how can the user tell the difference between +0 amnd -0?
 
 That's indeed a good question - by definition (-0)==(+0) is true,
 -0 < 0 is false and signum of both -0 and 0 is 0.
 
 I don't see an obvious way of distinguishing them at R level. Besides
 computational ways (like the 1/tan trick) the only (very ugly) way
 coming to my mind is something like:
 a==0 && substr(sprintf("%f",a),1,1)=="-"
 Note that print doesn't display the sign, only sprintf does.

On my XP machine running R 2.1.0 patched 2005-05-14

 sprintf("%f", -0)
[1] 0.00

does not print the sign.

however, the tan trick can be done without tan using just division:

R> sign0 <- function(x) if (x != 0) stop("x not zero") else sign(1/x)
R> sign0(0)
[1] 1
R> sign0(-0)
[1] -1
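The same division trick can be phrased as a predicate; a small sketch (my illustration, not from the thread), relying on the IEEE 754 rule that 1/(-0) is -Inf while 1/(+0) is Inf:

```r
# Hypothetical helper: TRUE only for negative zero.
# Uses only division, so no string formatting is needed.
is.negzero <- function(x) x == 0 && 1/x == -Inf

is.negzero(0)    # FALSE
is.negzero(-0)   # TRUE
is.negzero(-1)   # FALSE
```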



Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-01 Thread Gabor Grothendieck
On 6/1/05, Paul Murrell [EMAIL PROTECTED] wrote:
 Hi
 
 
 Gabor Grothendieck wrote:
  [moved from r-help to r-devel]
 
  On 5/31/05, Paul Murrell [EMAIL PROTECTED] wrote:
 
 
# mm.row[j] gives the row in the layout of the jth cell
# mm.col[j] gives the col in the layout of the jth cell
mm <- matrix(seq(nr*nc), nr, nc)
mm.row <- c(row(mm))
mm.col <- c(col(mm))
 
   # go to next cell in the array
j <- j + 1 # increment position
   pushViewport(viewport(layout.pos.row = mm.row[j], layout.pos.col = 
  mm.col[j]))
 
 Is that how to do it or is there some layout/mfcol-like way?
 
 
 That is how to do it.
 
 As far as grid is concerned, all viewports are equal and grid has no
 idea whether a viewport corresponds to a plot region or a margin or
 whatever, so grid has no concept of which viewport is the next one to use.
 
 
 
  OK. Thanks.  One suggestion.  Maybe the cells in a layout could have
  an order to them and there could be an optional argument that takes a linear
  index directly allowing easy linear traversals:
 
  for(i in seq(nr*nc)) {
 pushViewport(viewport(i)) # might need different syntax here
 xyplot(seq(i) ~ seq(i))
 popViewport()
  }
 
 
 I think this sort of thing can easily be built on top rather than into
 the existing system.  For example, here's a function that pushes all of
 the basic cells in a layout using a simple naming convention:
 
 layoutVPname <- function(i, j) {
   paste("layoutViewport", i, ",", j, sep="")
 }
 
 layoutVPpath <- function(i, j, name="layout") {
   vpPath(name, layoutVPname(i, j))
 }
 
 pushLayout <- function(nr, nc, name="layout") {
   pushViewport(viewport(layout=grid.layout(nr, nc),
                         name=name))
   for (i in 1:nr) {
     for (j in 1:nc) {
       pushViewport(viewport(layout.pos.row=i,
                             layout.pos.col=j,
                             name=layoutVPname(i, j)))
       upViewport()
     }
   }
   upViewport()
 }
 
 And here's a use of the function to push lots of layout cells, then draw
 lattice plots in different cells using downViewport() to go to the cell
 with the appropriate name.  In this case, we use cells by column, but
 simply reverse the order of the loops to use cells by row.
 
 pushLayout(2, 3)
 for (i in 1:2) {
   for (j in 1:3){
depth <- downViewport(layoutVPpath(i, j))
 print(xyplot(seq(i*j) ~ seq(i*j)), newpage=FALSE)
 upViewport(depth)
   }
 }
 
 
  and taking it one further perhaps 'with' could have a viewport method
  that automatically pushes the viewport on entry and pops or moves
  up one level on exit reducing the above to:
 
  for(i in seq(nr*nc)) with(viewport(i), xyplot(seq(i) ~ seq(i)))
 
 
 The raw grid functions have a 'vp' argument for this purpose.  It would
 be nice if lattice functions had something similar (or maybe just
 print.trellis).  Here's your example using the 'vp' argument to
 grid.text() (and using the layout that was pushed above) ...
 
 for (i in 1:2) {
   for (j in 1:3){
 grid.text(i*j, vp=layoutVPpath(i, j))
   }
 }

Thanks, again.   I'll try modifying your example to fit my specific
application (which requires a linear column-wise traversal ending 
at the nth cell where n may be less than the number of cells in the
matrix).



[Rd] Specifying dependency on a specific patched version

2005-05-14 Thread Gabor Grothendieck
Is there a way to specify a package dependency in the Depends:
field of the DESCRIPTION file for a specific patched version of a 
specific OS?

For example, today's Windows patched release (2005-05-14) has
the capability of changing the time zone within R on Windows
via Sys.putenv(TZ="GMT") but prior patched versions do not have this.
Is there some way of ensuring that the 2005-05-14 or later release is used
if on Windows only?



[Rd] Variable names with spaces in lm

2005-05-09 Thread Gabor Grothendieck
'lm' gives an error when using variables with spaces in them
in certain cases (but not all):

my.list <- list(`a 1` = 1:4, `a 2` = 11:14)
lm(`a 1` ~ ., my.list) # Error : Object "a.1" not found

# The following work ok so it does work in many cases:

lm(`a 1` ~ ., as.data.frame(my.list, optional = TRUE)) # ok
lm(`a 1` ~ `a 2`, my.list) # ok

Admittedly ?lm says that the data argument should be
a data frame or environment so this is not strictly a bug
but it seems to work with a list and the code even seems to 
coerce it to a data frame.
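The "a.1" in the error message is the fingerprint of make.names(), which the default data-frame coercion applies to column names; a short illustration (mine, not part of the original report):

```r
# make.names() replaces the space with a dot, which is why lm()
# later fails to find `a 1` under its original name.
make.names("a 1")                                        # "a.1"
# names survive only when the coercion is told not to check them:
names(as.data.frame(list(`a 1` = 1:4)))                  # "a.1"
names(as.data.frame(list(`a 1` = 1:4), optional = TRUE)) # "a 1"
```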

I am using Windows XP R version 2.1.0, 2005-04-18.



Re: [Rd] Variable names with spaces in lm

2005-05-09 Thread Gabor Grothendieck
There is one other problem with the coercion to data.frame.
terms.formula does not actually use the data contents, I think, so if it
did not coerce to a data frame it could handle unequal length
data in lists too.

On 09 May 2005 19:37:25 +0200, Peter Dalgaard [EMAIL PROTECTED] wrote:
 Gabor Grothendieck [EMAIL PROTECTED] writes:
 
  'lm' gives an error when using variables with spaces in them
  in certain cases (but not all):
 
my.list <- list(`a 1` = 1:4, `a 2` = 11:14)
lm(`a 1` ~ ., my.list) # Error : Object "a.1" not found
 
  # The following work ok so it does work in many cases:
 
lm(`a 1` ~ ., as.data.frame(my.list, optional = TRUE)) # ok
lm(`a 1` ~ `a 2`, my.list) # ok
 
  Admittedly ?lm says that the data argument should be
  a data frame or environment so this is not strictly a bug
  but it seems to work with a list and the code even seems to
  coerce it to a data frame.
 
  I am using Windows XP R version 2.1.0, 2005-04-18.
 
 As you probably gathered, the issue is that there's an as.data.frame()
 without optional=TRUE sitting inside terms.formula which causes the
 "." to expand to "a.1 + a.2". Sounds like that's unconditionally a bug,
 but I'm a little unsure...
 
 --
   O__   Peter Dalgaard Blegdamsvej 3
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
 ~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907




Re: [Rd] Light-weight data.frame class: was: how to add method to .Primitive function

2005-05-09 Thread Gabor Grothendieck
"[.default" is implemented in R as .subset.  See ?.subset and note that
it begins with a dot.  E.g. for the case where i and j are not missing:

"[.lwdf" <- function(x, i, j) lapply(.subset(x, j), "[", i)
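Expanding that one-liner into a runnable sketch (mine; the handling of a missing j is an added assumption, and factor columns would need extra care):

```r
lwdf <- function(...) structure(list(...), class = "lwdf")

# .subset() bypasses method dispatch, so this does not recurse
# back into "[.lwdf" when selecting the columns.
"[.lwdf" <- function(x, i, j) {
  if (missing(j)) j <- seq_along(unclass(x))
  structure(lapply(.subset(x, j), "[", i), class = "lwdf")
}

x <- lwdf(a = 1:3, b = letters[1:3])
y <- x[c(1, 3), ]
unclass(y)$a  # 1 3
```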



On 5/8/05, Vadim Ogranovich [EMAIL PROTECTED] wrote:
 Hi,
 
 Encouraged by a tip from Simon Urbanek I tried to use the S3 machinery
 to write a faster version of the data.frame class.
 This quickly hits a snag: the [.default(x, i) for some reason cares
 about the dimensionality of x.
 In the end there is a full transcript of my R session. It includes the
 motivation for writing the class and the problems I have encountered.
 
 As a result I see three issues here:
 * why [.default(x, i) doesn't work if dim(x) is 2? After all a single
 subscript into a vector works regardless of whether it's a matrix or
 not. Is there an alternative way to access [.default?
 * why does unclass() make deep copy? This is a facet of the global
 over-conservatism of R with respect to copying.
 * is it possible to add some sort copy profiling to R? Something like
 copyProfiling(TRUE), which should cause R to log sizes of each copied
 object (just raw sizes w/o any attempt to identify the object). This
 feature should at least help assess the magnitude of the problem.
 
 Thanks,
 Vadim
 
 Now the transcript itself:
  # the motivation: subscription of a data.frame is *much* (almost 20
 times) slower than that of a list
  # compare
  n = 1e6
  i = seq(n)
 
  x = data.frame(a=seq(n), b=seq(n))
  system.time(x[i,], gcFirst=TRUE)
 [1] 1.01 0.14 1.14 0.00 0.00
 
  x = list(a=seq(n), b=seq(n))
  system.time(lapply(x, function(col) col[i]), gcFirst=TRUE)
 [1] 0.06 0.00 0.06 0.00 0.00
 
 
  # the solution: define methods for the light-weight data.frame class
 lwdf = function(...) structure(list(...), class = "lwdf")
 
  # dim
  dim.lwdf = function(x) c(length(x[[1]]), length(x))
 
  # for pretty printing we define print.lwdf via a conversion to
 data.frame
  # as.data.frame.lwdf
 as.data.frame.lwdf = function(x) structure(unclass(x),
class="data.frame", row.names=as.character(seq(nrow(x))))
 
  # print
  print.lwdf = function(x) print.data.frame(as.data.frame.lwdf(x))
 
  # now the real stuff
 
  # [
  # the naive [.lwdf = function (x, i, j) lapply(x[j], function(col)
 col[i])
  # won't work because evaluation of x[j] calls [.lwdf again and not
 [.default
  # so we switch by the number of arguments
 "[.lwdf" = function (x, i, j) {
+   if (nargs() == 2)
+ NextMethod("[", x, i)
+   else
+ structure(lapply(x[j], function(col) col[i]),  class = "lwdf")
+ }
 
  x = lwdf(a=seq(3), b=letters[seq(3)], c=as.factor(letters[seq(3)]))
  i = c(1,3); j = c(1,3)
 
  # unfortunately, for some reasons [.default cares about
 dimensionality of its argument
  x[i,j]
 Error in [.default(x, j) : incorrect number of dimensions
 
 
  # we could use unclass to get it right
 "[.lwdf" = function (x, i, j) {
+   structure(lapply(unclass(x)[j], function(col) col[i]),  class =
"lwdf")
+ }
 
  x[i,j]
  a c
 1 1 a
 2 3 c
 
  # *but* unclass creates a deep copy of its argument as indirectly
 evidenced by the following timing
  x = lwdf(a=seq(1e6)); system.time(unclass(x))
 [1] 0.01 0.00 0.01 0.00 0.00
  x = lwdf(a=seq(1e8)); system.time(unclass(x))
 [1] 0.44 0.39 0.82 0.00 0.00
 
  version
 _
 platform x86_64-unknown-linux-gnu
 arch x86_64
 os   linux-gnu
 system   x86_64, linux-gnu
 status
 major2
 minor0.1
 year 2004
 month11
 day  15
 language R
 




Re: [Rd] How to understand packages, namespaces, environments

2005-05-08 Thread Gabor Grothendieck
On 5/8/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 [Alexander Stoddard]
  Subject: Re: [Rd] How to understand packages, namespaces, environments
 
  
   Does saying the following load package 'foo' into its own
  environment ?
 library(foo)
 
 [Duncan Murdoch]
  This loads some of the (name, object) pairs from the package into two
  environments:  the public one the user can see, and the
  namespace one that the code in the package can see.  They're
  related, you don't get two copies of the objects.
 
 That's interesting-- I thought there really were two copies. In my debug
 package, I've taken the approach of changing both copies.

How does one refer to the two different copies? Thanks.
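As a sketch of an answer (my illustration; the thread itself doesn't give one), the two copies can be addressed separately with standard tools: the attached package environment on the search path versus the namespace environment.

```r
library(stats)
library(utils)

# The "public" copy, from the attached environment on the search path:
pub <- get("median", envir = as.environment("package:stats"))
# The namespace copy, which code inside the package sees:
ns <- getFromNamespace("median", "stats")
identical(pub, ns)  # TRUE unless someone has modified one copy
```

"stats" and median() are just convenient stand-ins; the same pattern applies to any package with a namespace.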



Re: [Rd] how to add method to .Primitive function

2005-05-07 Thread Gabor Grothendieck
On 5/7/05, Vadim Ogranovich [EMAIL PROTECTED] wrote:
 I tried to write the dim method for the list class, but R doesn't seem
 to dispatch to it:
  dim.list = function(x) c(length(x[[1]]), length(x))
  dim(list(1))
 NULL
  dim.list(list(1))
 [1] 1 1
 
 What is the correct way of registering dim.list with .Primitive("dim")?

The list method of dim is sealed:

isSealedMethod("dim", "list")
[1] TRUE

but you could define a subclass, mylist, and use that:

setClass("mylist", representation(x = "list"), contains = "list")
[1] "mylist"
setMethod("dim", "mylist", function(x) c(length(x@x[[1]]),
  length(x@x)))
[1] "dim"
x <- new("mylist", x = list(x = 1, y = 2))
dim(x)
[1] 1 2



[Rd] NextMethod problem

2005-05-07 Thread Gabor Grothendieck
Can anyone shed any light on what is going wrong here?
Its based on simplifying some actual code that exists in
model.frame.default.  It looks like a bug to me.  Thanks.

data(iris)

f.default <- function(x, subset, ...) {
  subset <- eval(substitute(subset), iris, .GlobalEnv)
  subset
}

# This one is ok
f(1, 1:3)

# putting an intermediate f.numeric in results in body
# of subset function being returned
f.numeric <- function(x, subset, ...) NextMethod("f")
f(1, 1:3)

# slightly different f.numeric results in error
f.numeric <- function(x, ...) NextMethod("f")
f(1, 1:3)

R.version.string # Windows XP

---

Output:


 f.default <- function(x, subset, ...) {
+   subset <- eval(substitute(subset), iris, .GlobalEnv)
+   subset
+ }
 
 # This one is ok
 f(1, 1:3)
[1] 1 2 3
 
 
 # putting an intermediate f.numeric in results in body
 # of subset function being returned
 f.numeric <- function(x, subset, ...) NextMethod("f")
 f(1, 1:3)
function (x, ...) 
UseMethod("subset")
<environment: namespace:base>
 
 # slightly different f.numeric results in error
 f.numeric <- function(x, ...) NextMethod("f")
 f(1, 1:3)
Error in eval(expr, envir, enclos) : ..1 used in an incorrect context,
no ... to look in
 
 R.version.string # Windows XP
[1] R version 2.1.0, 2005-04-18



Re: [Rd] how to add method to .Primitive function

2005-05-07 Thread Gabor Grothendieck
Actually I think Simon's approach is a good one.  Maybe Simon's
comment was really intended to say that it's dangerous
to redefine list operations, but it should not be dangerous
to define a subclass of list and override operations that
way.  Also since the whole purpose of this is for speed I think
S3 is a better fit since from what I have seen S3 is faster
than S4.

On 5/7/05, Vadim Ogranovich [EMAIL PROTECTED] wrote:
 Thank you for the tip! I'll see if I can take it on to [-.mylist
 
 Why do you think it will be hard to replace data frames? Insightful
 recently introduced a new class, largeDataFrame or something like this.
 This new class looks and feels like a data.frame, but they made two key
 simplifications: all columns are atomic vectors and there is no
 rownames. Maintaining the latter is a big overhead in many data.frame
 operations, consider for example rbind() which needs to ensure uniquness
 of the row names.
 
 I'd really like to hear why you think it would be a bad idea to have
 such a class.
 
 Thanks,
 Vadim
 
  -Original Message-
  From: Simon Urbanek [mailto:[EMAIL PROTECTED]
  Sent: Saturday, May 07, 2005 12:34 PM
  To: Vadim Ogranovich
  Cc: Gabor Grothendieck; r-devel@stat.math.ethz.ch
  Subject: Re: [Rd] how to add method to .Primitive function
 
  On May 7, 2005, at 2:59 PM, Vadim Ogranovich wrote:
 
 
   But then mylist is not a list:
  
  
   x - new(mylist, x = list(x = 1, y = 2)) x[[1]]
  
  
   Error in x[[1]] : subscript out of bounds
  
   This is probably solvable by a sprinkle of setIs or setAs
  spells, but
   each time I get into the S4 area I feel walking a mine-field.
  
 
  Well, then you can still use S3:
  x - list(x=1:5, y=5:1)
  class(x) - mylist
  dim.mylist - function(l) c(length(l[[1]]),length(l))
  dim(x)
  [1] 5 2
  x[[1]]
  [1] 1 2 3 4 5
  is.list(x)
  [1] TRUE
 
  I'm not saying it's a good idea, because you can still break other
  things if you're not careful, but it's possible... If all you
  want is
  writing convenience functions for lists, that's fine, but I don't
  think you can replace data frames with such objects easily ...
 
  Cheers,
  Simon
 
 
 




[Rd] Re: NextMethod problem

2005-05-07 Thread Gabor Grothendieck
It's probably obvious, but just for completeness: I missed the
generic definition when pasting this into my post, so I have
added it below:

On 5/7/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
 Can anyone shed any light on what is going wrong here?
 Its based on simplifying some actual code that exists in
 model.frame.default.  It looks like a bug to me.  Thanks.
 

function(x, ...) UseMethod("f")

 data(iris)
 
 f.default <- function(x, subset, ...) {
  subset <- eval(substitute(subset), iris, .GlobalEnv)
  subset
 }
 
 # This one is ok
 f(1, 1:3)
 
 # putting an intermediate f.numeric in results in body
 # of subset function being returned
 f.numeric <- function(x, subset, ...) NextMethod("f")
 f(1, 1:3)
 
 # slightly different f.numeric results in error
 f.numeric <- function(x, ...) NextMethod("f")
 f(1, 1:3)
 
 R.version.string # Windows XP
 
 ---
 
 Output:

 function(x, ...) UseMethod("f")
 data(iris)
 
  f.default <- function(x, subset, ...) {
 +   subset <- eval(substitute(subset), iris, .GlobalEnv)
 +   subset
 + }
 
  # This one is ok
  f(1, 1:3)
 [1] 1 2 3
 
 
  # putting an intermediate f.numeric in results in body
  # of subset function being returned
  f.numeric <- function(x, subset, ...) NextMethod("f")
  f(1, 1:3)
 function (x, ...)
 UseMethod("subset")
 <environment: namespace:base>
 
  # slightly different f.numeric results in error
  f.numeric <- function(x, ...) NextMethod("f")
  f(1, 1:3)
 Error in eval(expr, envir, enclos) : ..1 used in an incorrect context,
 no ... to look in
 
  R.version.string # Windows XP
 [1] R version 2.1.0, 2005-04-18


__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Varying as.Date performance

2005-05-04 Thread Gabor Grothendieck
On 5/5/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
 On 5/4/05, Jeff Enos [EMAIL PROTECTED] wrote:
  R-devel,
 
  The performance of as.Date differs by a large degree between one of my
  machines with glibc 2.3.2:
 
   system.time(x <- as.Date(rep("01-01-2005", 10), format = "%m-%d-%Y"))
  [1] 1.17 0.00 1.18 0.00 0.00
 
  and a comparable machine with glibc 2.3.3:
 
   system.time(x <- as.Date(rep("01-01-2005", 10), format = "%m-%d-%Y"))
  [1] 31.20 46.89 81.01  0.00  0.00
 
  both with the same R version:
 
   R.version
  _
  platform i686-pc-linux-gnu
  arch i686
  os   linux-gnu
  system   i686, linux-gnu
  status
  major2
  minor1.0
  year 2005
  month04
  day  18
  language R
 
  I'm focusing on differences in glibc versions because of as.Date's use
  of strptime.
 
  Does it seem likely that the cause of this discrepancy is in fact
  glibc?  If so, can anyone tell me how to make the performance of the
  second machine more like the first?
 
  I have verified that using the chron package, which I don't believe
  uses strptime, for the above character conversion performs equally
  well on both machines.
 
 I think it's likely the character processing that is the bottleneck.  You
 can speed that part up by extracting the substrings directly:
 
  system.time({
 + dd <- rep("01-01-2005", 1)
 + year <- as.numeric(substr(dd, 7, 10))
 + mon <- as.numeric(substr(dd, 1, 2))
 + day <- as.numeric(substr(dd, 4, 5))
 + x <- as.Date(ISOdate(year, mon, day))
 + }, gc = TRUE)
 [1] 0.42 0.00 0.51   NA   NA
 
  system.time(x <- as.Date(rep("01-01-2005", 10), format = "%m-%d-%Y"), 
  gc=TRUE)
 [1] 1.08 0.00 1.22   NA   NA
 

Sorry, but I got the number of zeros in the reps wrong.  It's actually slower.

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Suggestions for manipulating formula objects

2005-04-21 Thread Gabor Grothendieck
On 4/20/05, Thomas Lumley [EMAIL PROTECTED] wrote:
 On Wed, 20 Apr 2005 [EMAIL PROTECTED] wrote:
 
  I'm trying to manipulate/change a formula prior to passing it to another
  function.  A simplified example:
 
  User passes formula to my function: y~x
  My function does: lm(transform(y)~x)
 
  Here, transform() is added to the model's response.
 
  What is the best way to accomplish this?
 
 One way is
 
 formula[[2]] <- substitute(transform(y), list(y = formula[[2]]))
 

Another way is to use update, e.g.

 fo <- y ~ x
 update(fo, transform(.) ~ .)
transform(y) ~ x
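For completeness, the two approaches can be sketched side by side (a minimal illustration, not from the original thread):

```r
# Two ways to wrap a formula's response in transform()
fo <- y ~ x

# 1. update() with a dot template: "." stands for the corresponding
#    part of the existing formula
update(fo, transform(.) ~ .)               # transform(y) ~ x

# 2. substitute() on the parse tree: element [[2]] of a two-sided
#    formula is the response
fo2 <- fo
fo2[[2]] <- substitute(transform(y), list(y = fo2[[2]]))
fo2                                        # transform(y) ~ x
```

The update() form is shorter; the substitute() form generalizes to arbitrary edits of the formula's parse tree.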

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Overloading methods in R

2005-04-21 Thread Gabor Grothendieck
On 4/21/05, Ali - [EMAIL PROTECTED] wrote:
 
 
 - How to overload methods in classes created by R.oo package?
 [snip]
 
 Maybe you missed it in the flurry of messages, but did the idea suggested
 by Gabor Grothendick not suit your needs?
 
 I had to abstract the question into a sentence this time to prevent it from
 being missed again 'in the flurry of messages'.
 
 Gabor Grothendieck's example does work for S3 and your own example does work
 for S4, but neither of them answers the problem I have stated a few times. Both
 examples work fine for a specific case. I am looking for a general
 solution to use with a parser as part of an automatic wrapper.
 
 When wrapping some arbitrary C++ classes, we don't know
 
 - how many functions are overloaded in a class
 - how many arguments each function has
 - what class is each argument of each function
 
 Implementing the wrapper in the way that you and Gabor Grothendieck suggest
 requires a lot of 'if' and 'missing' calls, making it too elaborate.



Here is another example which hopefully shows that
there is no limitation on the number of arguments,
number of methods, or classes.  Here we have
methods with one, two and three arguments and a
variety of classes.

Note that this and the previous examples are not
intended to be complete, finished solutions to your
problem but are only illustrative, to give you the
idea of how to proceed.  Generalizing them should not
be too complex nor inelegant.  Dealing with
vectors of arguments is not that much harder than
dealing with individual arguments in a vector
oriented language like R.


f <- function(...) UseMethod("f", NULL)

f.NULL <- function(...) {
args <- list(...)
classes <- sapply(args, class)
.Class <- paste(classes, collapse = ".")
NextMethod("f", ...)
}

f.numeric <- function(...) 2 * ..1
f.numeric.numeric <- function(...) ..1 + ..2
f.character.numeric.Date <- function(...) {
   args <- list(...)
   paste(args[[1]], args[[2]], format(args[[3]], "%Y-%m-%d"))
}
f.default <- function(...) print(list(...))


f(1)   # 2
f(1,2) # 3
f("a", 23, Sys.Date()) # "a 23 2005-04-21"
f()    # list()

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Overloading methods in R

2005-04-21 Thread Gabor Grothendieck
On 4/21/05, Ali - [EMAIL PROTECTED] wrote: 
 
 
 
 
 f <- function(...) UseMethod("f", NULL)
 
 f.NULL <- function(...) {
  args <- list(...)
  classes <- sapply(args, class)
  .Class <- paste(classes, collapse = ".")
  NextMethod("f", ...)
 }
 
 f.numeric <- function(...) 2 * ..1
 f.numeric.numeric <- function(...) ..1 + ..2
 f.character.numeric.Date <- function(...) {
  args <- list(...)
  paste(args[[1]], args[[2]], format(args[[3]], "%Y-%m-%d"))
 }
 f.default <- function(...) print(list(...))
 
 
 f(1) # 2
 f(1,2) # 3
 f("a", 23, Sys.Date()) # "a 23 2005-04-21"
 f() # list()
 
 Thanks Gabor! This answers a big part of my question. I am just curious 
 why
 something like this doesn't work in S4:
 
 -
 setGeneric("foo", function(object, ...) standardGeneric("foo"))
 
 foo.NULL <- function(object, ...) {
 args <- list(...)
 classes <- sapply(args, class)
 .Class <- paste(classes, collapse = ".")
 }
 
 foo.default <- function(object, ...) paste("wrong args!")
 
 foo.numeric <- function(object, ...) 2 * ..1
 foo.numeric.numeric <- function(object, ...) ..1 + ..2
 --
 
 
  I am not 100% sure I understand what the question is, but I think what
 you are looking for is the fact that ... cannot be part of the signature in
 S4. That is, ... can be among the method arguments, but you can't
 dispatch on those arguments in S4.
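A minimal sketch of the point above: S4 dispatches only on formal arguments named in the generic's signature, never on `...`, so the overloads have to be expressed as methods on named arguments (`foo` here is a hypothetical generic, not from the thread):

```r
# S4 dispatch happens on named signature arguments, not on "..."
setGeneric("foo", function(x, y, ...) standardGeneric("foo"))

# one-argument case: y is missing
setMethod("foo", signature("numeric", "missing"),
          function(x, y, ...) 2 * x)

# two-argument case
setMethod("foo", signature("numeric", "numeric"),
          function(x, y, ...) x + y)

foo(1)     # 2
foo(1, 2)  # 3
```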



__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Overloading methods in R

2005-04-20 Thread Gabor Grothendieck
On 4/20/05, Ali - [EMAIL PROTECTED] wrote:
 (1) It seems to me that, generally, in R it is not possible to overload
 functions. Is that right?
 
 (2) Assuming that the above is true, or partially true, is there any extra
 packages to handle overloading in R?
 
 (3) Assuming (1) is TRUE and (2) is FALSE, can anyone provide some advice on
 developing some function that understand what the arguments are and then
 calls the right overloaded function?
 
 It would be something like this:
 
 overloadedFunction1 <- function(x) {}
 
 overloadedFunction2 <- function(x, y) {}
 
 theFunction - function(...)
 {
   # How to identify ... and call the right overloaded function?
 }

Here is an example using S3:

 
 f <- function(x, y) UseMethod("f")
 
 f.default <- function(x, y, z) {
+ if (missing(z)) {
+ class.x <- if (missing(x)) "missing" else class(x)
+ class.y <- if (missing(y)) "missing" else class(y)
+ .Class <- paste(class.x, class.y, sep = ".")
+ NextMethod("f", z = 1)
+ } else # real default method
+ if (!missing(x) && !missing(y)) paste(x, y) else "one missing"
+ }
 
 f.missing.missing <- function(x, y, z) "both Missing"
 f.numeric.numeric <- function(x, y, z) paste(x, y)
 
 f()
[1] "both Missing"
 f(1)
[1] "one missing"
 f(y=1)
[1] "one missing"
 f(1,1)
[1] "1 1"

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] RFC: hexadecimal constants and decimal points

2005-04-17 Thread Gabor Grothendieck
On 4/17/05, Prof Brian Ripley [EMAIL PROTECTED] wrote:
 These are some points stimulated by reading about C history (and
 related in their implementation).
 
 1) On some platforms
 
  as.integer(0xA)
 [1] 10
 
 but not all (not on Solaris nor Windows).  We do not define what is
 allowed, and rely on the OS's implementation of strtod (yes, not strtol).
 It seems that glibc does allow hex: C99 mandates it but C89 seems not to
 allow it.
 
 I think that was a mistake, and strtol should have been used.  Then C89
 does mandate the handling of hex constants and also octal ones.  So
 changing to strtol would change the meaning of as.integer(011).

In the Windows batch language the following (translated to R):
   month <- substr("20050817", 5, 2)
must be further processed to remove any leading zero.  Mostly
people don't even realize this and just wind up writing erroneous
programs.  It's actually a big nuisance IMHO.
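For contrast, a small sketch of how R itself handles the leading zero (note R's substr takes start and stop positions, unlike the batch offset/length convention):

```r
# R's numeric conversion always parses decimally, so "08" is just 8,
# unlike languages where a leading zero triggers octal interpretation
month <- substr("20050817", 5, 6)  # characters 5..6, i.e. "08"
as.numeric(month)                  # 8
```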

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] documentation for 'open': some clarification?

2005-04-14 Thread Gabor Grothendieck
Here is an example:

# read and print first 10 lines one by one
# the next two lines could be collapsed into con <- file("myfile", "r")
con <- file("myfile")
open(con)
for(i in 1:10) print(readLines(con, n=1))
close(con)

Also it's possible that you may not need open.  For example,
one can just read it all in at once like this:

mylines <- readLines("myfile")
# and now mylines[1] is the first line, etc.

# or
my.numbers <- scan("myfile")

# or
my.table <- read.table("myfile")



On 4/14/05, Na Li [EMAIL PROTECTED] wrote:
 
 I'm been doing more and more of file text parsing inside R instead of
 coping with Perl.  For that, I need open a file and read it line-by-line.
 I found the documentation for 'open' isn't very clear.
 
 Right now it has
 
 ,[ *help(R)[open]* ]
 |  'open' opens a connection.  In general functions using connections
 |  will open them if they are not open, but then close them again, so
 |  to leave a connection open call 'open' explicitly.
 `
 
 It seems that one has to call 'open' to open a live connection that can be
 parsed line by line (using readLines() or scan() to read).  But open()
 cannot be directly applied to a file name.
 
 In fact, one only needs supply the 'open' argument in file() with
 something other than the empty string,
 
 ,
 |  con <- file ("S200.dat")
 |  isOpen (con)
 | [1] FALSE
 |  con <- file ("S200.dat", open = "r")
 |  isOpen (con)
 | [1] TRUE
 `
 
 It is not clear to me how 'open()' is supposed to be used.  It took me a
 while to figure this out and I thought it might be worthwhile to add one
 sentence or two to make it more clear in the doc.
 
 Cheers,
 
 Michael
 
 --
 Na (Michael) Li, Ph.D.
 Division of Biostatistics  A443 Mayo Building, MMC 303
 School of Public Health420 Delaware St SE
 University of MinnesotaMinneapolis, MN 55455
 Phone: (612) 626-4765  Email: [EMAIL PROTECTED]
 Fax:   (612) 626-0660  http://www.biostat.umn.edu/~nali
 
 GPG Public Key:
 http://www.umn.edu/lookup?SET_INSTITUTION=UMNTCUID=nalishow_pgp=1
 
 __
 R-devel@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-devel


__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] formulas and frames

2005-04-02 Thread Gabor Grothendieck
On Apr 2, 2005 3:35 PM, Charles Geyer [EMAIL PROTECTED] wrote:
 On Fri, Apr 01, 2005 at 05:04:23PM +0200, Dimitris Rizopoulos wrote:
  if I understand well you want something like this:
 
  y <- rnorm(100)
  p <- rnorm(100)
  x1 <- rnorm(100)
  x2 <- rnorm(100)
  x3 <- rnorm(100)
  nams <- c("y", "p", paste("x", 1:3, sep=""))
  ##
  dat <- as.data.frame(lapply(nams, get))
  names(dat) <- nams
  dat
 
  I hope it helps.
 
 Yes, that's it.  Thanks.

With that, maybe you could use a notation like this
where the predecessor variables are after the bar:

fo <- cbind(a, b) ~ c + I(c^2)*f | d + e

all.vars(fo) # all variables 
all.vars(fo[[2]]) # response variables
all.vars(fo[[3]][[2]]) # explanatory variables
all.vars(fo[[3]][[3]]) # predecessor variables

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] formulas and frames

2005-04-01 Thread Gabor Grothendieck
Try this:

my.df <- data.frame(a=1:10, b=11:20, c=21:30, d=31:40)
 model.response(model.frame(cbind(a,b) ~ c+d, my.df))
a  b
1   1 11
2   2 12
3   3 13
4   4 14
5   5 15
6   6 16
7   7 17
8   8 18
9   9 19
10 10 20


On Apr 1, 2005 2:31 AM, Charles Geyer [EMAIL PROTECTED] wrote:
 I have a design problem.  I'm trying to design a package that does
 something like glm, but
 
  1. the response is multivariate, so I can't be just like glm
 and get the response variables out of the formula.  I decided
 (perhaps incorrectly) to just supply the response variable
 names as an argument response.
 
  2. I have the usual predictor variables.
 
  3. I discovered the reshape function, which will allow me to
 string out the data frame in long form with the response
 strung out in a single vector.  So I can get a model matrix
 for the right hand side of the formula, which can also include
 what reshape calls the time variable (although it isn't time).
 
 so far so good, but
 
  4. Each response variable is conditioned on a predecessor variable.
 
 so we come to my question.  How do I force a bunch of variables into
 a data frame?  I need all of the response and predecessor variables,
 which are at this point specified only by a character vector containing
 their names (or something else???) and I also need all of the predictor
 variables.  If the user has supplied a data frame containing all that
 stuff fine!  But if it's just all over the place, some in the data
 argument and some in the R global environment (or elsewhere???).
 Maybe I'm just ignorant, but I haven't stumbled across a function that
 just stuffs all that stuff into a data frame (model.frame would do it
 if I didn't have this extra stuff).
 
 Any help?  Or do I just have to kludge this?
 
 --
 Charles Geyer
 Professor, School of Statistics
 University of Minnesota
 [EMAIL PROTECTED]
 
 __
 R-devel@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-devel


__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] An exercise in the use of 'substitute'

2005-04-01 Thread Gabor Grothendieck
On Apr 1, 2005 7:12 PM, Douglas Bates [EMAIL PROTECTED] wrote:
 I would like to create a method for the generic function with applied
 to a class of fitted models.  The method should do two things:
 
 1. Substitute the name of the first argument for '.' throughout the
 expression
 
 2. Evaluate the modified expression using the data argument to the
 fitted model as the first element of the search list.
 
 The second part is relatively easy.  The default method for with has body
   eval(substitute(expr), data, enclos = parent.frame())
 and you just change this to
   eval(substitute(expr), eval(data$call$data), enclos = parent.frame())
 
 So, for example
 
  fm <- lm(optden ~ carb, Formaldehyde)
  with.lm <- function(data, expr, ...) eval(substitute(expr),
 eval(data$call$data), enclos = parent.frame())
  with(fm, carb)
 [1] 0.1 0.3 0.5 0.6 0.7 0.9
 
 However, I haven't been able to work out a clever way of using
 substitute to get the first part.  I would like to be able to call, e.g.
 
 with(fm, xyplot(resid(.) ~ carb))
 
 and get a plot of resid(fm) ~ Formaldehyde$carb
 
 It is possible to do the first part by deparsing, substituting, and
 parsing but that's inelegant.  Can anyone suggest a more elegant method?
 
 BTW, the example of an lm model is just for illustration.  The actual
 use I have in mind is for lme (now lmer) models.  The plot method for
 the lme class in the nlme package does something very similar to this.

This seems to work, at least on your examples:

 with.lm <- function(data, expr, ...) eval(substitute(expr), 
+  append(model.frame(data), list(. = data)), parent.frame())
 
 library(lattice)
 data(Formaldehyde)
 fm <- lm(optden ~ carb, Formaldehyde)
 with(fm, carb)
[1] 0.1 0.3 0.5 0.6 0.7 0.9
 with(fm, xyplot(resid(.) ~ carb))

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] na.omit error message

2005-03-25 Thread Gabor Grothendieck

If I create a bivariate ts series with 3 time points and run
na.omit on it expecting it to reduce it to a one point ts series
instead I get the following error message:


R hh <- ts(matrix(c(NA,1:4,NA),3), start = c(1951,8), freq=12)
R hh
 Series 1 Series 2
Aug 1951   NA3
Sep 195114
Oct 19512   NA
R na.omit(hh)
Error in tsp<-(`*tmp*`, value = c(1951.667, 1951.667,  : 
invalid time series parameters specified
 

If I replace na.omit with na.contiguous then I get a protection stack
overflow:

R na.contiguous(hh)
Error: protect(): protection stack overflow


R R.version.string # Windows XP
[1] R version 2.1.0, 2005-03-23


Any comments on what is wrong?

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] rw2010alpha

2005-03-24 Thread Gabor Grothendieck

Until recently R 2.1.0 was called rw2010dev.  I just visited

   http://cran.r-project.org/bin/windows/base/rdevel.html

and noticed it's now called rw2010alpha, not rw2010dev.  I would like 
to use it without downloading all the libraries over again.

Can I 

- just rename my rw2010dev folder (Windows XP) to 
  rw2010alpha and then install rw2010alpha to that?  or 

- install it in a separate rw2010alpha folder, distinct 
  from the rw2010dev folder and then somehow copy the libraries 
  from one to the other? or 

- do I have to download the libraries all over again? or 

- some other solution?

Thanks.

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] sub('^', .....) bugs (PR#7742)

2005-03-23 Thread Gabor Grothendieck
 maechler at stat.math.ethz.ch writes:

: 
:  David == David Forrest drf5n at maplepark.com
:  on Tue, 22 Mar 2005 15:02:20 -0600 (CST) writes:
: 
: David According to help(sub), the ^ should match the
: David zero-length string at the beginning of a string:
: 
: yes, indeed.
: 
: David sub('^','var',1:3) # "1" "2" "3"
: David sub('$','var',1:3) # "1var" "2var" "3var"
: 
: David # This generates what I expected from the first case:
: David sub('^.','var',11:13)  # "var1" "var2" "var3"
: 
: there are even more fishy things here:
: 
: 1) In your cases, the integer 'x' argument is auto-coerced to
:character, however that fails as soon as  'perl = TRUE' is used.
: 
:   sub('^','v_', 1:3, perl=TRUE)
:  Error in sub.perl(pattern, replacement, x, ignore.case) : 
:invalid argument
: 
:  {one can argue that this is not a bug, since the help file asks
:   for 'x' to be a character vector; OTOH, we have
:   as.character(.) magic in many other places, i.e. quite
:   naturally here;  
:   at least  perl=TRUE and perl=FALSE should behave consistently.}
: 
: 2) The 'perl=TRUE' case behaves even more problematically here:
: 
:    sub('^','v_', LETTERS[1:3], perl=TRUE)
:   [1] "A\0e" "B\0J" "C\0S"
:    sub('^','v_', LETTERS[1:3], perl=TRUE)
:   [1] "A\0J" "B\0P" "C\0J"
:    sub('^','v_', LETTERS[1:3], perl=TRUE)
:   [1] "A\0\0" "B\0\0" "C\0m" 
:   
: 
:  i.e., the result is random nonsense.
: 
: Note that this happens both for R-patched (2.0.1)  and R-devel (2.1.0 alpha).
: 
: == forwarded as bug report to R-bugs

Also consider the following which may be related.  #1 does not
place an X before the first word and #2 causes R to hang.

R R.version.string # Windows XP
[1] R version 2.1.0, 2005-03-17

R gsub("\\b", "X", "The quick brown fox") # 1
[1] "The Xquick Xbrown Xfox"

R gsub("\\b", "X", "The quick brown fox", perl = TRUE) # 2
... hangs ...

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Lemon drops

2005-03-15 Thread Gabor Grothendieck
Peter Dalgaard p.dalgaard at biostat.ku.dk writes:

: 
: I bumped into the following situation:
: 
: Browse[1] coef
: deg0NA deg4NA deg8NA deg0NP deg4NP deg8NP
: (Intercept)462510528492660762
: Browse[1] coef[,1]
: [1] 462
: Browse[1] coef[,1,drop=F]
: deg0NA
: (Intercept)462
: 
: where I really wanted neither, but
: 
: (Intercept)
: 462
: 
: Anyone happen to know a neat way out of the conundrum?
: 
: I can think of
: 
: rowSums(coef[,1,drop=F])
: 
: or of course
: 
: val <- coef[,1]
: names(val) <- rownames(x)
: 
: but the first one is sneaky and the second gets a bit tedious...
: 


If by tedious you mean it's not a single expression,
then the basic idea of your solution #2 can be preserved
while doing it in a single expression like this:

structure(coef[,1], .Names = rownames(x))

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Re: [R] Do environments make copies?

2005-02-26 Thread Gabor Grothendieck

See ?gctorture

Nawaaz Ahmed nawaaz at inktomi.com writes:

: 
: Hi Folks,
: Thanks for all your replies and input. In particular, thanks Luke, for 
: explaining what is happening under the covers. In retrospect, my example 
:   using save and load to demonstrate the problem I was having was a 
: mistake - I was trying to reproduce the problem I was having in a simple 
: enough way and I thought save and load were showing the same problem 
: (i.e. an extra copy was being made). After carefully examining my gc() 
: traces,
: I've come to realize that while there are copies being made, there is 
: nothing unexpected about it - the failure to allocate memory is really 
: because R is hitting the 3GB address limit imposed by my linux box 
: during processing. So as Luke suggests, maybe 32 bits is not the right 
: platform for handling large data in R.
: 
: On the other hand, I think the problem can be somewhat alleviated 
: (though not eliminated) if we did garbage collection of temporary 
: variables immediately so that we can reduce the memory footprint and the 
: fragmentation problem that malloc() is going to be faced with 
: (gctorture() is probably too extreme . Most of the problems that I am 
: having  are in the coercion routines which do create temporary copies. 
: So in code of the form x = as.vector(x), it would be nice if the old 
: value of x was garbage collected (i.e. if there were no references to it)
: 
: nawaaz
: 
: Luke Tierney wrote:
:  On Thu, 24 Feb 2005, Berton Gunter wrote:
:  
:  I was hoping that one of the R gurus would reply to this, but as they 
:  have't
:  (thus far) I'll try. Caveat emptor!
: 
:  First of all, R passes function arguments by values, so as soon as you 
:  call
:  foo(val) you are already making (at least) one other copy of val for the
:  call.
:  
:  
:  Conceptually you have a copy, but internally R tries to use a
:  copy-on-modify strategy to avoid copying unless necessary.  There are
:  conservative approximations involved, so there is more copying than
:  one might like, but definitely not as much as this.
:  
:  
:  Second,you seem to implicitly make the assumption that assign(..., env=)
:  uses a pointer to point to the values in the environment. I do not 
:  know how
:  R handles environments and assignments like this internally, but your 
:  data
:  seems to indicate that it copies the value and does not merely point 
:  to it
:  (this is where R Core folks can shed more authoritative light).
:  
:  
:  This assignment does just store the pointer.
:  
:  Finally, it makes perfect sense to me that, as a data structure, the
:  environment itself may be small even if it effectively points to (one of
:  several copies of) large objects, so that object.size(an.environment) 
:  could
:  be small although the environment may contain huge arguments. Again, 
:  the
:  details depend on the precise implementation and need clarification by
:  someone who actually knows what's going on here, which ain't me.
: 
:  I think the important message is that you shouldn't treat R as C, and you
:  shouldn't try to circumvent R's internal data structures and 
:  conventions. R
:  is a language designed to implements Chambers's S model of 
:  Programming with
:  Data. Instead of trying to fool R to handle large data sets, maybe you
:  should consider whether you really **need** all the data in R at one time
:  and if sensible partitioning or sampling to analyze only a portion or
:  portions of the data might not be a more effective strategy.
:  
:  
:  R can do quite a reasonable job with large data sets on a resonable
:  platform.  A 32 bit platform is not a reasonable one on which to use R
:  with 800 MB chunks of data. Automatic memory management combined with
:  the immutable vector semantics require more elbow room than that.  If
:  you really must use data of this size on a 32-bit platform you will
:  probably be much happier using a limited amount of C code and external
:  pointers.
:  
:  As to what is happening in this example: look at the default parent
:  used by new.env and combine that with the fact that the serialization
:  code does not preserve sharing of atomic objects.  The two references
:  to the large object are shared in the original session but lead to two
:  large objects in the saved image and the load.  Using
:  
:  ref - list(env = new.env(parent = .GlobalEnv))
:  
:  in new.ref avoids the second copy both in the saved image and after
:  loading.
:  
:  luke
:  
: 
:  -Original Message-
:  From: r-help-bounces at stat.math.ethz.ch
:  [mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of Nawaaz Ahmed
:  Sent: Thursday, February 24, 2005 10:36 AM
:  To: r-help at stat.math.ethz.ch
:  Subject: [R] Do environments make copies?
: 
:  I am using environments to avoid making copies (by keeping
:  references).
:  But it seems like there is a hidden copy going on somewhere - for
:  example in the code fragment below, I am creating a reference 

Re: [Rd] matrix operations

2005-02-20 Thread Gabor Grothendieck
Melanie Vida mvida at mac.com writes:

: 
: In R, I've imported a data frame of 2,321,123 by 4 called dataF.
: I converted the data frame dataF to a matrix
: 
: dataM <- as.matrix(dataF)
: 
: Does R have an efficient routine to treat the special elements that 
: contain Inf in them?  For example, can you separate the rows that have 
: Inf elements from the matrix into a separate matrix without iterating 
: over the entire matrix?

This will eliminate all rows that contain Inf or -Inf.

m[apply(is.finite(m), 1, all),]

: 
: Also, does R have an efficient way to sort columns in a matrix?

This can be interpreted a number of different ways:

See
?order
?sort

m[order(m[,1]),] # sorts so that column 1 is sorted and rows stay together
apply(m, 2, sort) # sort each column separately

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Test Tools

2005-02-14 Thread Gabor Grothendieck
Paul Roebuck roebuck at odin.mdacc.tmc.edu writes:

 
 Anyone aware of tools available that can provide complexity
 metrics and/or code coverage analysis for R source?
 

I am not aware of anything, but there are some code-analysis tools in 
the codetools package.  Do a google search for:
   codetools Luke

Note that most software complexity metrics are highly correlated
so there may be little to gain beyond using lines of code.
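As a rough illustration of the lines-of-code proxy mentioned above (a sketch; `pkgdir` is a hypothetical package directory, and blank/comment lines are counted too):

```r
# Crude lines-of-code count over a package's R sources; per the remark
# above, LOC correlates strongly with most complexity metrics
r_files <- list.files(file.path("pkgdir", "R"),
                      pattern = "\\.[RrSs]$", full.names = TRUE)
loc <- sum(sapply(r_files, function(f) length(readLines(f))))
loc
```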

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Vignette without latex/sweave

2005-01-26 Thread Gabor Grothendieck

Is it possible to create a package vignette without latex/sweave?  I gather
that the %\VignetteIndexEntry and other %\Vignette... lines in the .Rnw 
file provide the various vignette metadata, but was just wondering 
if latex/sweave are really hardwired into this or if it's possible to 
produce a .pdf using other software and have it included as a vignette.  
If so, how does one specify the metadata?

I don't actually have a need for this but was just wondering.

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Any plans for commenting out region via something like /* */ ?

2005-01-07 Thread Gabor Grothendieck
A.J. Rossini blindglobe at gmail.com writes:

: 
: Greetings from Switzerland!
: 
: Are there any plans/initiatives/considerations in future versions of R
: for commenting out regions via something like  /**/  ?


You could use multiline character strings.  Just need to watch
out for embedded quotes.
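A sketch of the workaround (the string is simply evaluated and discarded, so it only works where a bare expression is harmless, e.g. at top level of a script):

```r
x <- 1:10
"
This block is effectively commented out: it is a single character
constant, so it must not contain an unescaped double quote.
plot(x)
lines(lowess(x, x))
"
y <- x + 1
```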

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Windows vignettes, shQuote, texi2dvi

2005-01-03 Thread Gabor Grothendieck
Gabor Grothendieck ggrothendieck at myway.com writes:

: 
: Further searching also found it at:
: 
: http://www.ctan.org/tex-archive/graphics/texdraw/manual/texi2dvi
: 
: but it's the UNIX shell script.  I looked at it quickly and it
: probably would not be that hard to translate it into R (about half
: of it is just argument processing) replacing
: the existing texi2dvi R command with the equivalent of that
: script.   It could be added to the wishlist.
: 
: A kludge might be to run that shell script on windows under cygwin
: although that might still not fix the problem of building vignettes 
: from the R CMD build tool.
: 
: Personally, I am just going to stick with MiKTeX's texify.exe and build
: vignettes from the .tex file manually for now.

Just an update.  I just installed the January 2, 2005 r-devel build and
I am now able to automatically generate vignettes on Windows.

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Windows vignettes, shQuote, texi2dvi

2004-12-31 Thread Gabor Grothendieck
Further searching also found it at:

http://www.ctan.org/tex-archive/graphics/texdraw/manual/texi2dvi

but it's the UNIX shell script.  I looked at it quickly and it
probably would not be that hard to translate it into R (about half
of it is just argument processing) replacing
the existing texi2dvi R command with the equivalent of that
script.   It could be added to the wishlist.

A kludge might be to run that shell script on windows under cygwin
although that might still not fix the problem of building vignettes 
from the R CMD build tool.

Personally, I am just going to stick with MiKTeX's texify.exe and build
vignettes from the .tex file manually for now.

Gabor Grothendieck ggrothendieck at myway.com writes:

: 
: That's interesting but it's not in my tree.  I downloaded it
: off the net (all 500MB+ !).
: 
: Liaw, Andy andy_liaw at merck.com writes:
: 
: : 
: : I don't know the issue, but a search for texi2dvi under my TeXLive tree
: : found a shell script by that name in the directory
: : texmf\doc\graphics\texdraw.  Can that be made to work, in conjunction with
: : the shell in the Rtool bundle?  Just curious.  (This is fpTeX installed 
from
: : the TeXLive 2004 CD.)
: : 
: : Andy
: : 
: :  From: Gabor Grothendieck
: :  
: :  From:   Uwe Ligges ligges at statistik.uni-dortmund.de
: :   Gabor Grothendieck wrote:
: :   
: :I noticed a shQuote fix for Windows in the svn logs.
: :Just wanted to point out that this will favorably 
: :affect texi2dvi on Windows which previously used
: :UNIX quoting and so generated an incorrect Windows
: :command. (Note that texi2dvi is used when creating
: :vignettes.)
: :
: :Another problem is that the recommended tex
: :distribution for Windows, fptex, does not have texi2dvi
: :in the first place. The alternative, MiKTeX, is harder
: :to install
: :   
: :   I heard some people think so, but apparently it is not a 
: :  big problem.
: :  
: :  I installed both so I know from experience that MiKTeX was
: :  harder to install.
: :  
: :  The problem is that it's a lot of work to set up a package-
: :  building environment on Windows. There are numerous systems to
: :  install: TeX, UNIX tools, Perl, Microsoft Help, plus path problems.
: :  
: :  MiKTeX is just one
: :  more obstacle.  MiKTeX seems like a slicker distribution
: :  package, so it's too bad it's more work to set up.  The main
: :  problems with fpTeX are not configuration but that (1) the
: :  package is so large that it takes a long time to download and
: :  (2) it's missing crucial elements, viz. texi2dvi.
: :  
: :   
: :   
: :than fptex but instructions are available
: :and it does have an equivalent to texi2dvi called texify.exe;
: :however, R still does not know about texify.exe 
: :resulting in additional installation hassles,
: :viz. setting the texi2dvi option or setting up a texi2dvi.bat
: :file yourself that calls texify.exe or just forgetting
: :about texi2dvi and manually running the necessary tex commands
: :when creating vignettes on Windows.
: :   
: :   texi2dvi is also available in MikTeX, see package 
: :  miktex-texinfo-bin.
: :   
: :   See also ?texi2dvi in R to see how you can set something 
: :  different, e.g. 
: :   texify, which is explicitly given in the examples.
: :   
: :   
: :   
: :There are also additional problems related to using the
: :perl package-building scripts in Windows. I can create the
: :zoo vignette using:
: :
: :setwd(...whatever...)
: :Sweave("zoo.Rnw")
: :texi2dvi("zoo.tex")
: :
: :provided I modify the source to texi2dvi to use the
: :appropriate quoting but even with that modification,
: :the perl package build scripts only build
: :the vignette as far as the zoo.tex file and I must
: :do the rest by hand so there is some problem with
: :the scripts or in how they interact with R.
: :   
: :   Are you going to provide a patch?
: :  
: :  I assume the shQuote change obviates the need for my kludge to
: :  texi2dvi.   If it were not written in Perl I would have looked
: :  at the script part too, but relearning Perl poses too high
: :  an obstacle time-wise.  It seems that the direction is to move
: :  the scripts to R anyway, so I assume Perl will ultimately be
: :  eliminated from the R distribution, simplifying this sort of
: :  maintenance.
: :  


Re: [Rd] Windows vignettes, shQuote, texi2dvi

2004-12-30 Thread Gabor Grothendieck

From:   Uwe Ligges [EMAIL PROTECTED]
 Gabor Grothendieck wrote:
 
  I noticed a shQuote fix for Windows in the svn logs.
  Just wanted to point out that this will favorably 
  affect texi2dvi on Windows which previously used
  UNIX quoting and so generated an incorrect Windows
  command. (Note that texi2dvi is used when creating
  vignettes.)
  
  Another problem is that the recommended tex
  distribution for Windows, fptex, does not have texi2dvi
  in the first place. The alternative, MiKTeX, is harder
  to install
 
 I heard some people think so, but apparently it is not a big problem.

I installed both so I know from experience that MiKTeX was
harder to install.

The problem is that it's a lot of work to set up a package-
building environment on Windows.  There are numerous systems to
install: TeX, UNIX tools, Perl, Microsoft Help, plus path problems.

MiKTeX is just one
more obstacle.  MiKTeX seems like a slicker distribution
package, so it's too bad it's more work to set up.  The main
problems with fpTeX are not configuration but that (1) the
package is so large that it takes a long time to download and
(2) it's missing crucial elements, viz. texi2dvi.

 
 
  than fptex but instructions are available
  and it does have an equivalent to texi2dvi called texify.exe;
  however, R still does not know about texify.exe 
  resulting in additional installation hassles,
  viz. setting the texi2dvi option or setting up a texi2dvi.bat
  file yourself that calls texify.exe or just forgetting
  about texi2dvi and manually running the necessary tex commands
  when creating vignettes on Windows.
 
 texi2dvi is also available in MikTeX, see package miktex-texinfo-bin.
 
 See also ?texi2dvi in R to see how you can set something different, e.g. 
 texify, which is explicitly given in the examples.
 
 
 
  There are also additional problems related to using the
  perl package-building scripts in Windows. I can create the
  zoo vignette using:
  
  setwd(...whatever...)
  Sweave("zoo.Rnw")
  texi2dvi("zoo.tex")
  
  provided I modify the source to texi2dvi to use the
  appropriate quoting but even with that modification,
  the perl package build scripts only build
  the vignette as far as the zoo.tex file and I must
  do the rest by hand so there is some problem with
  the scripts or in how they interact with R.
 
 Are you going to provide a patch?

I assume the shQuote change obviates the need for my kludge to
texi2dvi.   If it were not written in Perl I would have looked
at the script part too, but relearning Perl poses too high
an obstacle time-wise.  It seems that the direction is to move
the scripts to R anyway, so I assume Perl will ultimately be
eliminated from the R distribution, simplifying this sort of
maintenance.



[Rd] Windows vignettes, shQuote, texi2dvi

2004-12-29 Thread Gabor Grothendieck

I noticed a shQuote fix for Windows in the svn logs.
Just wanted to point out that this will favorably 
affect texi2dvi on Windows which previously used
UNIX quoting and so generated an incorrect Windows
command.  (Note that texi2dvi is used when creating
vignettes.)

Another problem is that the recommended tex
distribution for Windows, fptex, does not have texi2dvi
in the first place. The alternative, MiKTeX, is harder
to install than fptex but instructions are available 
and it does have an equivalent to texi2dvi called texify.exe;
however, R still does not know about texify.exe 
resulting in additional installation hassles,
viz. setting the texi2dvi option or setting up a texi2dvi.bat
file yourself that calls texify.exe or just forgetting
about texi2dvi and manually running the necessary tex commands
when creating vignettes on Windows.
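For anyone taking the texi2dvi.bat route, a minimal sketch of such a wrapper might look like the following (this is an assumption about what works, not a tested recipe; it presumes texify.exe is on the PATH and that forwarding all arguments is acceptable):

```bat
@echo off
rem Hypothetical texi2dvi.bat that forwards everything to MiKTeX's texify.exe.
rem Place it somewhere on the PATH ahead of any other texi2dvi.
texify %*
```

texify with no options runs LaTeX the required number of times to resolve cross-references, which is the main service texi2dvi provides.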

There are also additional problems related to using the
Perl package-building scripts on Windows.   I can create the
zoo vignette using:

setwd(...whatever...)
Sweave("zoo.Rnw")
texi2dvi("zoo.tex")

provided I modify the source of texi2dvi to use the
appropriate quoting, but even with that modification
the Perl package-build scripts only build
the vignette as far as the zoo.tex file and I must
do the rest by hand, so there is some problem with
the scripts or in how they interact with R.

I think the real, ultimate solution is to replace the
Perl scripts with R code to make them more maintainable.
This seems to be the direction things are moving anyway,
and I suspect that they were developed long before R
got to its current advanced state, where R is just
as powerful as Perl.



Re: [Rd] Importing csv files

2004-12-23 Thread Gabor Grothendieck

 
 
  My understanding is that David is not distributing dataload any more, 
  though 
  I would not like to discourage commercial vendors (such as providers of 
  Stat/Transfer and DBMSCOPY) from providing .rda output as an option. I 
  assume that new code written under GPL would not be a problem. -Frank
 
 I said `except under GPL'. I am not trying to discourage anyone, just 
 pointing out that GPL has far-ranging implications that are often 
 over-looked.
 

One way to encourage other software to provide .rda interfaces 
would be to document (or make more visible if such a 
document already exists) the C routines that read and write .rda 
files.
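At the R level the interface is just save() and load(); documenting the C routines they call underneath is what would let third-party software read and write the format directly.  A minimal sketch of the R-level round trip:

```r
# Write an object to an .rda file and read it back.  The underlying
# C entry points (not named here) are what would need documenting
# for non-R software to produce compatible files.
x <- 1:5
save(x, file = "x.rda")
rm(x)
load("x.rda")
x  # restored: 1 2 3 4 5
```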



[Rd] R, fptex, MikTex

2004-12-20 Thread Gabor Grothendieck


According to the following Windows package building info:

   http://www.murdoch-sutherland.com/Rtools/

fptex is easier to install with the R package building tools than
MikTex.  I have been using MikTex but was thinking of switching
over to fptex to simplify my setup.  

My concern is that I have other LaTeX files not related to R
that use MiKTeX.  I am considering two situations:

1. Both. Has anyone installed both on their system?  Are 
there problems that I should know about if I do that?  
If it causes me problems will I be able to easily back out 
of them and get back to my current setup?

2. Convert to fptex. What about converting everything over 
from MikTex to fptex?  What sort of problems can I expect if
I switch?

Any advice on this?  Thanks.



Re: [Rd] value of a loop broken by break

2004-12-19 Thread Gabor Grothendieck
Peter Dalgaard p.dalgaard at biostat.ku.dk writes:

: 
: Gabor Grothendieck ggrothendieck at myway.com writes:
: 
:  Should the 'for' loop in the following example not return 3 rather than 2?
:  The Language Manual says that it returns the result of the last evaluated
:  statement and that would be the i before the 'break'.  'repeat' and 'while'
:  have the same behavior.
:  
:  R (for(i in 1:10) if (i==3) { i; break } else i)
:  [1] 2
: 
: Hmmm... First, let's look at some variants:
: 
:   (for(i in 1:10) {pi; if (i==3) { i; break } else 123})
: [1] 123
: 
: Notice that you're getting neither 2 nor 3.1415926, but the 123
: from the previous iteration. Similarly
: 
:   (for(i in 1:10) {pi; if (i==3) { i; break }else 123; 456})
: [1] 456
: 
: So you are getting the result of the last _completely_ evaluated
: statement (the enclosing {-statement is not completed either).
: 


This seems like undesirable behavior to me.  The prototypical example
of this is searching for something and then returning it.

I think break should be more like return:

  
for (i in 1:10) if (i==3) { i; break } else i  # returns 3
for(i in 1:10) if (i==3) break(i) else i  # same
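Until something like that exists, the usual idiom is to assign the value to a variable before breaking, which does not rely on the loop's return value at all:

```r
# Search for the first element meeting a condition; keep it in a
# variable so the value survives the break.
found <- NULL
for (i in 1:10) if (i == 3) { found <- i; break }
found  # 3
```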



RE: [Rd] More on Lazy loading errors building its package in a chroot

2004-12-18 Thread Gabor Grothendieck


From:   Dirk Eddelbuettel [EMAIL PROTECTED]

 I now have the package built in a chroot -- but at the price of setting
 'LazyLoad: no' in DESCRIPTION. 
 
 I do not quite understand why that is needed. Can someone else help? I can
 provide the following pointers for its-1.0.4
 

I have had problems on Windows in which I had to pay the
same price; it turned out that using forward slashes
rather than backslashes in the pathnames of my R CMD ... line
circumvented it.  This is obviously not your problem, but
it shows that such symptoms can have unexpected
causes.

 * The file 'DESCRIPTION' has the Depends on methods, stats and Hmisc. Under 
 Debian Hmisc and acepack get properly loaded as needed to build this.
 
 * A file 'install.R' still exists from the older version. I reckon it can
 be removed, but it doesn't seem to matter either way.
 
 * The file 'NAMESPACE' is there, it contains import(), export(),
 exportClass() and exportMetods() directives.
 
 * The file R/itspkg.r has 
 .onLoad <- function(lib, pkg) require(methods)
 [...]
 as.its.zoo <- function(x) {
 stopifnot(require(its))
 index <- attr(x, "index")
 stopifnot(inherits(index, "POSIXct"))
 attr(x, "index") <- NULL
 its(unclass(x), index)
 }

The as.its.zoo function was originally part of the 'zoo'
package.   It was moved to 'its' since it seemed to make
more sense there.  The 'require(its)' line was needed when it
was part of 'zoo' but now that it is part of 'its' I think
it can be eliminated.  Check if that has any effect.
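If the require() call is indeed redundant now that the function lives in 'its', the stripped-down version would be (an untested sketch of the suggestion, not code from either package):

```r
# as.its.zoo without the now-redundant require("its"); inside the
# 'its' package the its() constructor is already visible.
as.its.zoo <- function(x) {
    index <- attr(x, "index")
    stopifnot(inherits(index, "POSIXct"))
    attr(x, "index") <- NULL
    its(unclass(x), index)
}
```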

 and I have the feeling that these may interfere with the LazyLoad
 directive. But I just don't understand how something can work in 
 session with a controlling terminal, but fail in the chroot'ed batch
 built. 
 
 Comments would be highly welcome. But please speak very slowly when it comes
 to S4 and LazyLoading matters. 

There is an article on Lazy Loading in R News you may wish
to read.

 
 Thanks, Dirk
 
 
 
 On Tue, Dec 14, 2004 at 10:17:16PM -0600, Dirk Eddelbuettel wrote:
  Trying to build its_1.0.4 in a chroot environment to update the
  corresponding Debian package, I get 
  
  
  * Installing *source* package 'its' ...
  ** R
  ** inst
  ** save image
  Loading required package: Hmisc
  Hmisc library by Frank E Harrell Jr
  
  Type library(help='Hmisc'), ?Overview, or ?Hmisc.Overview')
  to see overall documentation.
  
  NOTE:Hmisc no longer redefines [.factor to drop unused levels when
  subsetting. To get the old behavior of Hmisc type dropUnusedLevels().
  
  Attaching package 'Hmisc':
  
  
  The following object(s) are masked from package:stats :
  
  ecdf
 
  Creating a new generic function for names in its
  Creating a new generic function for names- in its
  Creating a new generic function for print in its
  Creating a new generic function for start in its
  Creating a new generic function for end in its
  Creating a new generic function for summary in its
  Creating a new generic function for diff in its
  Creating a new generic function for union in its
  Creating a new generic function for intersect in its
  
  ** preparing package for lazy loading
  Error in loadNamespace(name) : There is no package called 'its'
  Execution halted
  ERROR: lazy loading failed for package 'its'
  make: *** [R_any_arch] Error 1
  pbuilder: Failed autobuilding of package
 
  
  The package installs fine when built on the command-line. This is somehow
  related to the reduced environment provided in the chroot -- I recall having
   seen (and fixed) the error when other packages were needed during build
  time. Hmisc is installed. Nothing else outside of R-base should be needed.
  
  I think I am overlooking something simple, but a couple of simple attempts
  didn't get me anywhere. The chroot isn't a problem per se as several dozen
  CRAN packages get built that way into Debian packages.
  
  Puzzled, Dirk




[Rd] value of a loop broken by break

2004-12-18 Thread Gabor Grothendieck

Should the 'for' loop in the following example not return 3 rather than 2?
The Language Manual says that it returns the result of the last evaluated
statement and that would be the i before the 'break'.  'repeat' and 'while'
have the same behavior.


R> (for(i in 1:10) if (i==3) { i; break } else i)
[1] 2

R> R.version.string # Windows XP
[1] "R version 2.0.1, 2004-11-04"



[Rd] HOME environment variable

2004-12-10 Thread Gabor Grothendieck

In 

http://cran.r-project.org/bin/windows/base/CHANGES.rw2010dev

there is a reference to the setting of a HOME environment
variable on Windows:

R CMD / Rcmd now set HOME if unset, e.g. for use in Rcmd check.

Was this intended to be R_HOME?  Also, I don't understand the comment;
could someone clarify it?  What is the "use in Rcmd check" referring to?

Finally, just in case it really does mean HOME, I have had problems 
in the past with multiple software systems (R was not one of them) 
all using HOME differently and conflicting with each other so if this
is a new use of HOME I would recommend that it be changed to a 
name that is not likely to conflict with other pieces of software.



Re: [Rd] regex to match word boundaries

2004-12-06 Thread Gabor Grothendieck
Gabor Grothendieck ggrothendieck at myway.com writes:

: 
: Can someone verify whether or not this is a bug.
: 
: When I substitute all occurrence of \\B with X
: R seems to correctly place an X at all non-word boundaries
: (whether or not I specify perl) but \\b does not seem to
: act on all complement positions:
: 
: > gsub("\\b", "X", "abc def") # nothing done
: [1] "abc def"
: > gsub("\\B", "X", "abc def") # as expected, I think
: [1] "aXbXc dXeXf"
: > gsub("\\b", "X", "abc def", perl = TRUE) # not as expected
: [1] "abc Xdef"
: > gsub("\\B", "X", "abc def", perl = TRUE)  # as expected
: [1] "aXbXc dXeXf"
: > R.version.string  # Windows 2000
: [1] "R version 2.0.1, 2004-11-27"

I have found another, possibly related, problem.  In the above,
\\B always worked as expected but \\b did not.  I now have an
example where \\B does not work as expected either.  Note
that in the first example below all the letters which are not
first in the word get prefaced with X, as expected, but in the second
case only alternate letters which are not first in the
word get replaced with X, whereas one would have expected
that all letters not first in the word get replaced with X.

R> gsub("\\B", "X", "The Quick Brown Fox") # works as expected
[1] "TXhXe QXuXiXcXk BXrXoXwXn FXoXx"

R> gsub("\\B.", "X", "The Quick Brown Fox", perl = TRUE) # problem
[1] "TXe QXiXk BXoXn FXx"

R> R.version.string # Windows XP
[1] "R version 2.0.1, 2004-11-04"


By the way, do I have to submit a second bug report for this or is
it possible to add this onto the previous one as a comment?



Re: [Rd] Wishlist: simple legend options (PR#7400)

2004-12-06 Thread Gabor Grothendieck
Warnes, Gregory R gregory.r.warnes at pfizer.com writes:

: 
: I'm also glad to see this features go into the standard packages.  
: 
: I think that it may be worthwhile to regularly 'nominate' features/functions
: present in other packages for 'promotion' into the standard R packages.

That's a good idea.  I would certainly like to see the handy
sfsmisc::glob2rx in the standard R packages.  It converts
a glob, i.e. a wildcard specification, into a regular expression and
can therefore be plugged into many spots in the standard R
functions.  Its body is only 2 lines long.
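For readers without sfsmisc at hand, the core idea can be sketched in a couple of lines (a deliberate simplification, not the real glob2rx, which handles more corner cases such as trimming redundant anchors):

```r
# Simplified glob -> regex: escape literal dots, then map the
# wildcards '*' -> '.*' and '?' -> '.', anchoring the result.
glob2rx_sketch <- function(glob) {
  rx <- gsub("\\.", "\\\\.", glob)       # protect '.'
  rx <- gsub("\\*", ".*", rx)            # '*' matches anything
  rx <- gsub("\\?", ".", rx)             # '?' matches one char
  paste("^", rx, "$", sep = "")
}
glob2rx_sketch("ab*.R")  # a regex that prints as "^ab.*\\.R$"
```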



RE: [Rd] seq.Date requires by

2004-12-03 Thread Gabor Grothendieck


Don't know the answer to your question but note
that chron can do it without by=.

xD <- as.Date("1996-01-01")
yD <- as.Date("1996-12-01")
library(chron)
xc <- chron(xD)
yc <- chron(yD)

seq(xc, yc)

# Also one can do this in either chron or Date:

chron(xc:yc)
as.Date(xD:yD)
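For reference, the Date method does work once by= is supplied; the wishlist item only asks that it default to one day:

```r
# Works today: an explicit daily step ("day" is one of the
# documented by= strings for seq.Date).
seq(from = as.Date("1996-01-01"), to = as.Date("1996-12-01"), by = "day")
```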



Date:   Fri, 3 Dec 2004 11:29:08 -0800 
From:   Vadim Ogranovich [EMAIL PROTECTED]
To:   [EMAIL PROTECTED] 
Subject:   [Rd] seq.Date requires by 

 
Hi,

What is the reason for seq.Date to require the 'by' argument and not to
default it to 1 in the example below?


> seq(from=as.Date("1996-01-01"), to=as.Date("1996-12-01"))
Error in seq.Date(from = as.Date("1996-01-01"), to =
as.Date("1996-12-01")) : 
exactly two of `to', `by' and `length.out' / `along.with' must be
specified

This is R-1.9.1, but I haven't seen anything pertaining in the release
notes of 2.0.x.

Thanks,
Vadim



RE: [Rd] seq.Date requires by

2004-12-03 Thread Gabor Grothendieck

Oops.  I had my own as.Date routines loaded.  (These
will be made available in the next version of zoo,
but here they are in the meantime.)

If you want as.Date(xD:yD) and similar
to work, you should define these:

as.Date.numeric <- function (x, ...) 
   structure(floor(x + 0.001), class = "Date")
as.Date.integer <- function (x, ...) 
   structure(x, class = "Date")
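With methods like those defined, the colon idiom works because xD:yD produces an integer vector of day counts, which the as.Date.integer method then wraps back into a Date.  A quick illustration:

```r
# Assumes as.Date.numeric / as.Date.integer methods like the
# ones above have been defined in the session.
xD <- as.Date("1996-01-01")
yD <- as.Date("1996-01-05")
as.Date(xD:yD)  # five consecutive Dates starting 1996-01-01
```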

Date:   Fri, 3 Dec 2004 21:42:37 -0500 (EST) 
From:   Gabor Grothendieck [EMAIL PROTECTED]
To:   [EMAIL PROTECTED], [EMAIL PROTECTED] 
Subject:   RE: [Rd] seq.Date requires by 

 


Don't know the answer to your question but note
that chron can do it without by=.

xD <- as.Date("1996-01-01")
yD <- as.Date("1996-12-01")
library(chron)
xc <- chron(xD)
yc <- chron(yD)

seq(xc, yc)

# Also one can do this in either chron or Date:

chron(xc:yc)
as.Date(xD:yD)



Date: Fri, 3 Dec 2004 11:29:08 -0800 
From: Vadim Ogranovich [EMAIL PROTECTED]
To: [EMAIL PROTECTED] 
Subject: [Rd] seq.Date requires by 


Hi,

What is the reason for seq.Date to require the 'by' argument and not to
default it to 1 in the example below?


> seq(from=as.Date("1996-01-01"), to=as.Date("1996-12-01"))
Error in seq.Date(from = as.Date("1996-01-01"), to =
as.Date("1996-12-01")) : 
exactly two of `to', `by' and `length.out' / `along.with' must be
specified

This is R-1.9.1, but I haven't seen anything pertaining in the release
notes of 2.0.x.

Thanks,
Vadim



Re: [Rd] Wishlist: simple legend options (PR#7400)

2004-12-02 Thread Gabor Grothendieck
 epurdom at stanford.edu writes:

: 
: Full_Name: Elizabeth Purdom
: Version: 1.9.1
: OS: Windows XP
: Submission from: (NULL) (171.64.102.199)
: 
: It would be nice if legend had the option of some default locations you could
: choose instead of entering specific coordinates, like topleft,
: topright,topcenter, etc. based on par(usr) coordinates. I know I've 
wanted
: it so often I've made my own simple non-robust wrap-around, so I don't have 
to
: remember or parse the xjust and yjust options necessary to make it work. Of
: course there should be the option of entering in your own coordinates. 

Check out smartlegend in package gtools, which is part of the gregmisc
bundle.  I agree it's a useful feature and it would be nice if that
feature migrated to the base graphics.

: 
: Also it would be nice to be able to put a optional title inside your legend.
: Currently I just make my title the first value in my legend vector, and then 
fix
: the other options so no symbols plot next to it. But this isn't always a 
pretty
: result and can be a pain if your symbols are complicated.
: 
: Thanks,
: Elizabeth
: 



[Rd] regex to match word boundaries

2004-12-01 Thread Gabor Grothendieck


Can someone verify whether or not this is a bug.

When I substitute all occurrences of "\\B" with "X",
R seems to correctly place an X at all non-word boundaries
(whether or not I specify perl = TRUE), but "\\b" does not seem to
act on all complement positions:

> gsub("\\b", "X", "abc def") # nothing done
[1] "abc def"
> gsub("\\B", "X", "abc def") # as expected, I think
[1] "aXbXc dXeXf"
> gsub("\\b", "X", "abc def", perl = TRUE) # not as expected
[1] "abc Xdef"
> gsub("\\B", "X", "abc def", perl = TRUE)  # as expected
[1] "aXbXc dXeXf"
> R.version.string  # Windows 2000
[1] "R version 2.0.1, 2004-11-27"



RE: [Rd] a better source(echo=TRUE)  {was how to pause}

2004-11-30 Thread Gabor Grothendieck

 From:   Martin Maechler [EMAIL PROTECTED]
  
  Duncan == Duncan Murdoch [EMAIL PROTECTED]
  on Sun, 28 Nov 2004 10:25:24 -0500 writes:
 
 Duncan 
 Duncan 
 
 Duncan We already have code to source() from the clipboard, and it could
 Duncan address the problems above, but:
 
 Duncan - Source with echo=T doesn't echo, it deparses, so some comments are
 Duncan lost, formatting is changed, etc.
 
 yes, and we would have liked to have an alternative source()
 for a *very* long time...
 Examples where I hate the non-echo (i.e. the loss of all
 comments and own-intended formatting) is when you use it for
 demos, etc, notably in R's own demo() and example() functions.
 
 But to do this might be more tricky than at first thought:
 Of course you can readLines() the source file and writeLines()
 them to whatever your console is. The slightly difficult thing
 is to see which chunks to ``send to R'', i.e. to parse() and eval().
 The basic problem seems to see when expressions are complete.
 
 Maybe we should / could think about enhancing parse() {or a new
 function with extended behavior} such that it would not only
 return the parse()d expressions, but also indices (byte or even
 line counters) to the source text, indicating where each of the
 expression started and ended.
 
 That way I could see a way to proceed.

The other problems with source() are: (1) reading from stdin, so that
one can create self-contained files; and (2) even if possible, it
would also be nice to be able to put the data at the end of
the file so the code is not broken up by potentially large
stretches of data.

Number (1) can be addressed by pasting the output of dput
into your file, but it's not pretty.  A workaround addressing
both (1) and (2) is in:

https://stat.ethz.ch/pipermail/r-help/2003-June/033622.html

Perhaps these two considerations could be taken into account
if source() is being revisited.
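The flavor of such a workaround (not the linked code itself, and the marker name here is made up) is to mark where the data starts and read that part separately instead of source()-ing it:

```r
# Sketch: code at the top of the file, data after a marker line.
# The script locates the marker in its own text and parses the
# remainder as a table rather than as R code.
lines <- readLines("script.R")
marker <- which(lines == "# __DATA__")[1]
dat <- read.table(textConnection(lines[(marker + 1):length(lines)]))
```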






 --- On Tue 11/30, Martin Maechler  [EMAIL PROTECTED]  wrote:
From: Martin Maechler [mailto: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
Date: Tue, 30 Nov 2004 12:51:12 +0100
Subject: [Rd] a better source(echo=TRUE)  {was how to pause}




Re: [Rd] Differences between $ and [[

2004-11-29 Thread Gabor Grothendieck
Eric Lecoutre lecoutre at stat.ucl.ac.be writes:

: 
: Hi,
: 
: If I define the following list:
: 
:   (l <- list("text-align" = "right"))
: $"text-align"
: [1] "right"
: 
: I know that I can't use l$text-align, as the parser will find a '-'
: operation.
: If I want (need) to use special names, such as "text-align", I have to
: enclose it between "".  So I can use:
: 
: l$"text-align" or l[["text-align"]]
: 
: If now I have the text "text-align" defined in a variable:
: p <- "text-align"
: 
: I can use:
:   l[[p]]
: [1] "right"
: 
: But I can't use l$p
: 
: whereas it is said in the help page that 'x$name' is equivalent to
: 'x[["name"]]'.
: 
: Anyway I will use [[ but I don't clearly understand this behavior.

[[ evaluates its right argument and $ does not.  The "..." notation is
just to allow one to specify non-syntactic names.  One could
alternately use l$`text-align` .  I think the "..." notation
is a holdover from before `...` was implemented.

It's also possible to define your own class and have $ operate any
way you like on it (although it's probably best to stick with
standard behavior and the following is not really recommended):

l <- list("text-align" = "right", a = 2)
class(l) <- c("mylist", "list")
"$.mylist" <- function(x, idx) { 
 y <- x[[idx]]
 if (is.null(y)) x[[eval.parent(parse(text = idx))]] else y
}
p <- "text-align"
l$p # "right"
l$"text-align" # same
l$`text-align` # same
a <- 99
l$a # 2
l$"a" # same
l$`a` # same
l[["a"]] # same
l[[a]] # 99



Re: [Rd] Differences between $ and [[

2004-11-29 Thread Gabor Grothendieck
Gabor Grothendieck ggrothendieck at myway.com writes:

: 
: Eric Lecoutre lecoutre at stat.ucl.ac.be writes:
: 
: : 
: : Hi,
: : 
: : If I define the following list:
: : 
: :   (l-list(text-align=right))
: : $text-align
: : [1] right
: : 
: : I know that I can't use l$text-align, as the parser will find a '-' 
: operation.
: : If I want (need) to use special names, as text-align, I have to enclose 
: : it between . So I can use:
: : 
: : l$text-align or l[[text-align]]
: : 
: : If now I have the text text-align defined in a variable:
: : p-text-align
: : 
: : I can use:
: :   l[[p]]
: : [1] right
: : 
: : But I can't use l$p
: : 
: : where as it is said in the help page that 'x$name' is equivalent to 
: : 'x[[name]]'.
: : 
: : Anyway I will use [[ but I dont clearly understand this behavior.
: 
: [[ evaluates its right argument and $ does not.  The ... notation is
: just to allow one to specify non-syntactic arguments.  One could
: alternately use l$`text-align` .  I think the ... notation
: is a holdover from before `...` was implemented.
: 
: Its also possible to define your own class and have $ operate any
: way you like on it (although its probably best to stick with
: standard behavior and the following is not really recommended):
: 
:   l <- list("text-align" = "right", a = 2)
:   class(l) <- c("mylist", "list")
:   "$.mylist" <- function(x, idx) { 
:y <- x[[idx]]
:if (is.null(y)) x[[eval.parent(parse(text = idx))]] else y
:   }
:   p <- "text-align"
:   l$p # "right"
:   l$"text-align" # same
:   l$`text-align` # same
:   a <- 99
:   l$a # 2
:   l$"a" # same
:   l$`a` # same
:   l[["a"]] # same
:   l[[a]] # 99

The comment on the last one is wrong.  It should give an error since
there is no 99th component to l.



[Rd] Version names

2004-11-28 Thread Gabor Grothendieck


I have Windows XP build scripts that look for R in a variety
of folders and if multiple ones are found, takes the last
one.  For example, I currently have the following 
in \Program Files\R :

rw1060
rw1062
rw1071
rw1071beta
rw1081
rw1090
rw1090dev
rw1091
rw1091pat
rw2000
rw2001beta

Note that it currently correctly gets rw2001beta but if I
add rw2001 then it will not pick it up since rw2001beta
would continue to sort after it.

I tried using dates in my scripts to pick out the right one
but realized that it was somewhat error prone and decided to
move back to ordering by the names.

I wonder if a naming convention could be adopted that ensured
that the names sort in order.  For example,

rw2010a - alpha, i.e. development version (previously rw2010dev)
rw2010b - beta version (previously rw2001beta)
rw2010f - final version (previously rw2010)
rw2010p - patched version (previously rw2010pat)

Other possibilities might include an ascending numeric or date suffix.
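The point of the suffix scheme is that plain lexicographic sorting then picks out the right directory, e.g. (assuming an ASCII/C collation, which is exactly the locale caveat raised later in the thread):

```r
# Hypothetical install directories under the proposed a/b/f/p scheme;
# sorting by name puts the most recent release last.
dirs <- c("rw2000", "rw1091pat", "rw2010a", "rw2010b", "rw2010f", "rw2010p")
tail(sort(dirs), 1)  # "rw2010p"
```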

I also thought about executing 
bin\R --version 
or 
bin\Rcmd --version
in each rw folder, but am not sure whether this works in all versions
and, if it does, whether the output always provides a comparable
format.



Re: [Rd] Version names

2004-11-28 Thread Gabor Grothendieck
Peter Dalgaard p.dalgaard at biostat.ku.dk writes:

: 
: Gabor Grothendieck ggrothendieck at myway.com writes:
: 
:  :  rw2010a - alpha, i.e. development version (previously rw2010dev)
:  :  rw2010b - beta version (previously rw2010beta)
:  :  rw2010f - final version (previously rw2010)
:  :  rw2010p - patched version (previously rw2010pat)
:  : 
:  : That would work.  Use the rename command, and you've got it.
:  : 
:  
:  Renaming is not really workable if you are giving your scripts to others.
:  They won't want build scripts that rename their folders.  
: ..
:  It's really desirable to keep Windows batch scripts as simple as 
:  possible and such a transformation is trickier than you might
:  think in the Windows batch language. 
:  
:  I was hoping that the R team would consider a simplifying
:  change to make it easier to create build scripts.  The least such 
:  change that I can think of is to use alpha as a suffix in place of dev 
:  and to use final to suffix unsuffixed versions.  That would be enough to 
:  allow them to sort in ascending order.
:  
:  This involves no coding at all for the R team.  Just a decision
:  to regularize the naming.
: 
: In a word, no, we won't do that. I'd certainly veto it if it came to
: that. 
: 
: It is a bad idea to have scripts depending on sort order (and R core
: has learned the hard way that sort order depends on locale), and it is
: an even worse idea to modify development concepts to fit a specific
: alphabetical order. Case in point: A development version is _not_ the
: same as an alpha version!

I am pretty sure one could come up with a naming scheme that would
work in every locale.  If we used a, b, f and p then I doubt there
are any locales in which those are not increasing.  If that's not
true we could use numbers.  I really doubt that there are locales
in which digits are not increasing.



:



Re: [Rd] Version names

2004-11-28 Thread Gabor Grothendieck
Duncan Murdoch murdoch at stats.uwo.ca writes:

: 
: On Sun, 28 Nov 2004 21:11:15 + (UTC), Gabor Grothendieck
: ggrothendieck at myway.com wrote:
: 
: Renaming is not really workable if you are giving your scripts to others.
: They won't want build scripts that rename their folders.  
: 
: : Another possibility is to have your build script split up the name and
: : apply a custom sort order to it.
: : 
: : The pattern is:
: : 
: : rw + version number + suffix
: : 
: : The suffices appear in this order:  dev, alpha, beta, "", pat
: : (where not all appear, e.g. dev never appears for a minor release,
: : and sometimes alpha is skipped).
: 
: It's really desirable to keep Windows batch scripts as simple as 
: possible and such a transformation is trickier than you might
: think in the Windows batch language. 
: 
: You need a better scripting language!  

Quite so but unfortunately if I want to produce a script
that has no dependencies then I need to use what is provided 
in Windows and Windows batch commands seem to be the best
way to do that.



Re: [Rd] Version names

2004-11-28 Thread Gabor Grothendieck
Gabor Grothendieck ggrothendieck at myway.com writes:

 
 Simon Urbanek simon.urbanek at math.uni-augsburg.de writes:
 

 : If all you want to do is to determine the current (most recently 
 : installed) R version, then all it takes is two lines of C code [just 
 : read one registry entry] - and it's at least as portable across Windows 
 : systems as a batch script, but far more flexible. (There may even be a 
 : way to get that info w/o coding at all - I'm not sure whether regedit 
 : has any batch mode or something ...).
 
 I don't think regedit has a batch mode.  e.g. regedit /? does not give help.

I looked into it a bit more and some of this information is actually in 
the FAQ:


2.15 Does R use the Registry?
Not itself. 

The installers set some entries to allow uninstallation. In
addition (by default, but this can be de-selected) they set
a Registry key LOCAL_MACHINE\Software\R-core\R giving the
version and install path. Again, this is not used by R
itself, but it will be used by the DCOM interface
(http://cran.r-project.org/other-software.html). Finally, a
file association for extension .RData is set in the
Registry. 

You can add the Registry entries by running RSetReg.exe in
the bin folder, and remove them by running this with
argument /U. Note that the settings are all per machine and
not per user, and that this neither sets up nor removes the
file associations. 

Also, it seems that one uses reg.exe rather than regedit.exe from 
batch files, so putting all this together we get the following 
Windows XP batch statement to get the current path to the rw folder. 
It puts the path into the Rrw variable:

for /f "tokens=2*" %%a in (
'reg query hklm\software\r-core\r /v InstallPath') do set Rrw=%%b

The bad news is that this is not 100% guaranteed to work since, as mentioned
above, the user can deselect modification of the registry during installation
but it's certainly more than sufficient for my purposes and probably most 
other purposes too.

Thanks for pointing the way.



Re: [Rd] Lightweight data frame class

2004-11-26 Thread Gabor Grothendieck
Vadim Ogranovich vograno at evafunds.com writes:

: 
: Don't know whether it will suffice. Lm() was just an example. Are you
: going to re-write lm(), e.g. lm.zoo(), to accept lists?

A previous unreleased version of zoo did hack lm but the current test
version interfaces to lm without making any changes to lm at all.  

: I am more thinking of a general purpose class that would pass wherever
: data.frame is expected.

Yes, I figured so.  The lightweight data frame idea seems neat 
but thought I would mention this, in addition, in case it's germane.

: 
: Probably I need to wait until the new version of zoo comes out. At the
: very least it could be a good prototype for what I have in mind.

If you want it before then contact me offlist and I can send you 
the beta test version.

: 
: Thanks for the info,
: Vadim
: 
:  -Original Message-
:  From: r-devel-bounces at stat.math.ethz.ch 
:  [mailto:r-devel-bounces at stat.math.ethz.ch] On Behalf Of Gabor 
:  Grothendieck
:  Sent: Thursday, November 25, 2004 7:42 PM
:  To: r-devel at stat.math.ethz.ch
:  Subject: Re: [Rd] Lightweight data frame class
:  
:  Vadim Ogranovich vograno at evafunds.com writes:
:  
:  : 
:  : Hi,
:  : 
:  : As far as I can tell data.frame class adds two features to those of
:  : lists:
:  : * matrix structure via [,] and [,]- operators  (well, I 
:  know these are
:  : actually [(i, j, ...), not [,]). 
:  : * row names attribute.
:  : 
:  : It seems that the overhead of the support for the row names, both
:  : computational and RAM-wise, is rather non-trivial. I frequently
:  : subscript from a data.frame, i.e. use [,] on data frames, 
:  and my timing
:  : shows that the equivalent list operation is about 7 times 
:  faster, see
:  : below.
:  : 
:  : On the other hand, at least in my usage pattern, I really 
:  rarely benefit
:  : from the row names attribute, so as far as I am concerned 
:  row names is
:  : just an overhead. (Of course the speed difference may be 
:  due to other
:  : factors, the only thing I can tell is that subscripting is 
:  very slow in
:  : data frames relative to in lists).
:  : 
:  : I thought of writing a new class, say 
:  lightweight.data.frame, that would
:  : be polymorphic with the existing data.frame class. The class would
:  : inherit from list and implement [,], [,]- operators. It 
:  would also
:  : implement the rownames function that would return 
:  seq(nrow(x)), etc.
:  : It should also implement as.data.frame to avoid the overhead of
:  : conversion to a full-blown data.frame in calls like lm(y ~ x,
:  : data=myLightweightDataframe).
:  
:  The next version of zoo (currently in
:  test) supports lists in the data argument of lm and can also 
:  merge zoo series into a list (or to another zoo series, as it 
:  does now).
:  Would that be a sufficient alternative?
:  



Re: [Rd] str() resets class for environments

2004-11-25 Thread Gabor Grothendieck
Henrik Bengtsson hb at maths.lth.se writes:

: 
:  -Original Message-
:  From: r-devel-bounces at stat.math.ethz.ch 
:  [mailto:r-devel-bounces at stat.math.ethz.ch] On Behalf Of 
:  Mark.Bravington at csiro.au
:  Sent: Wednesday, November 24, 2004 1:43 AM
:  To: hb at maths.lth.se
:  Cc: r-devel at stat.math.ethz.ch
:  Subject: RE: [Rd] str() resets class for environments
:  
:  
:  Henrik Bengtsson wrote:
:  
:   Should attr()-, attributes()-,
:   class()- give an error when applied to an environment? I see
:   no reason why
:   not.
:  
:  It would break the workspace-organization code in the 
:  'mvbutils' package, which relies on being able to set and 
:  unset attributes of environments on the search path 
:  (specifically, the 'name' and 'path' attributes).
:  
:  So personally I'd much prefer not to have this happen! For 
:  the wider R community, I'm not sure how many users the 
:  'mvbutils' package has, but I think it's a fair number 
:  judging from emails I get. 
:  
:  If you do feel the extra security is vitally important, 
:  perhaps there could be 'lock.attributes' and 
:  'unlock.attributes' functions for environments. The idea 
:  would be that each environment has an invisible (i.e. 
:  internal) mock-attribute locked, which would be TRUE by 
:  default (on creation of the environment). While locked is 
:  TRUE, any attempt to muck about with the environment's 
:  attributes would cause an error. But if you really did need 
:  to change attributes of that , it would still be possible by 
:  calling 'unlock.attributes' first.
: 
: Then I agree with you that it should not be made defunct; in general, if
: there is a rational for using a feature in R and if someone relies on it,
: then I believe one should be careful and consider alternatives before
: removing it. I hope I did not break your code now by bringing it up to the R
: core team.
: 
: I am curious though, do you not run into problems by setting and getting
: attributes on environment in 'mvbutils'? The example of John Chambers I
: re-posted, which shows that attributes can (will?) get killed by operating
: on the [environment] object locally in a function, suggests that you
: will.
: 

The Chambers example simply shows what can happen now -- not what should
happen.  The real question is what should happen.  If all objects
can have classes and attributes and if environments are objects then
it follows that environments should be able to have classes and
attributes too.  If the language is to be made irregular in this
respect then there needs to be a very good reason and none has
been put forth, at least in this thread.



Re: [Rd] str() resets class for environments

2004-11-25 Thread Gabor Grothendieck
Peter Dalgaard p.dalgaard at biostat.ku.dk writes:

: 
: Gabor Grothendieck ggrothendieck at myway.com writes:
: 
:  : I am curious though, do you not run into problems by setting and getting
:  : attributes on environment in 'mvbutils'? The example of John Chambers I
:  : re-posted, which shows that attributes can (will?) get killed by 
operating
:  : on the [environment] object locally in a function, suggests that you
:  : will.
:  : 
:  
:  The Chambers example simply shows what can happen now -- not what should
:  happen.  The real question is what should happen.  If all objects
:  can have classes and attributes and if environments are objects then
:  it follows that environments should be able to have classes and
:  attributes too.  If the language is to be made irregular in this
:  respect then there needs to be a very good reason and none has
:  been put forth, at least in this thread.
: 
: Environments are already irregular in that they are never duplicated
: (which is what causes these issues in the first place). External
: pointers have the same feature, and Luke Tierney has suggested that
: they perhaps should be wrapped in an object with more normal
: semantics. Perhaps we should consider doing likewise with
: environments? 

To me that would make sense in keeping the rules of the language
more consistent.  The mvbutils example suggests that it also has
uses in addition to regularity and additional ones may come to
light too.
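The never-duplicated behaviour discussed above is ordinary reference semantics. A rough analogy in Python (illustrative only; R environments are not Python dicts, but the aliasing effect is the same):

```python
# Assignment copies the reference, not the contents, so mutation
# through one name is visible through the other -- the analogue of
# passing an R environment into a function and modifying it there.
e1 = {"x": 1}
e2 = e1             # no duplication: e2 is the same object as e1
e2["x"] = 99
print(e1["x"])      # 99 -- the change is visible through e1
print(e1 is e2)     # True
```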



Re: [Rd] str() resets class for environments

2004-11-25 Thread Gabor Grothendieck
 Mark.Bravington at csiro.au writes:

: 
:  : Henrik Bengtsson:
:  :  : I am curious though, do you not run into problems by 
:  setting and getting
:  :  : attributes on environment in 'mvbutils'? The example of 
:  John Chambers I
:  :  : re-posted, which shows that attributes can (will?) get 
:  killed by 
:  operating
:  :  : on the [environment] object locally in a function, 
:  suggests that you
:  :  : will.
:  :  : 
:  :  
: 
: In my code, I do want to permanently set the global attributes of an 
environment such as pos.to.env(1)
: from within a function, so the behaviour of [in effect] 'attr<-.env' is not 
at all problematic :)
: 
: [In old versions of R, I used to have to invoke a function 
called 'lib.fixup' after setting an attribute of a
: search path environment, to make sure it really happened, but this has 
been unnecessary in recent versions.]
: 
:  :Peter Dalgaard p.dalgaard at biostat.ku.dk writes:
:  : Environments are already irregular in that they are never duplicated
:  : (which is what causes these issues in the first place). External
:  : pointers have the same feature, and Luke Tierney has suggested that
:  : they perhaps should be wrapped in an object with more normal
:  : semantics. Perhaps we should consider doing likewise with
:  : environments? 
:  
:  Gabor Grothendieck ggrothendieck at myway.com writes:
:  To me that would make sense in keeping the rules of the language
:  more consistent.  The mvbutils example suggests that it also has
:  uses in addition to regularity and additional ones may come to
:  light too.
:  
: 
: If environments were wrapped, would it still be possible to change the 
attributes of e.g. pos.to.env (5),
: as opposed to the attributes of a copy? That's the feature I'm keen to 
retain. For example:
: 
: > e5 <- pos.to.env( 5)
: > attr( e5, 'my.attr') <- 'whatever'
: > # If 'e5' is a wrapper, then this will no longer change the attribute of 'pos.to.env( 5)'
: > attr( pos.to.env( 5), 'my.attr')
: NULL
: > # So I would need to be able to do something ugly like this:
: > pos.to.env( 5) <- e5  # Yuk!?
: 
: Wrapping environments does avoid the top level of nonduplication, but the 
deeper level of
: nonduplicating the contents is unavoidable (and that's the point, of 
course): 
: 
: > pos.to.env(5)$x
: NULL
: > e5$x <- 0
: > pos.to.env(5)$x
: [1] 0
: 
: So I'm not sure all this can ever be made entirely clean, even if 
automatic wrapping did get used. Equally,
: though, I'm not sure I've correctly understood the intention.


I don't know what the right answer should be but right now
if e1 == e2 for two environments then those environments are 
the same since environments are just pointers.

1. If attributes and classes were somehow stored in the 
environment itself (which is what you are looking for
I think) then the above condition would also
be sufficient to guarantee that they have the same class
and attributes.

2. If attributes and classes are associated with a wrapper
then equality requires that the underlying environments
are the same and also requires that the classes and
attributes of the two wrappers be the same.

Both these viewpoints have a certain sense to them in
that the first is more R-like but the second is
more environment-like.

3. The current situation where environments cannot
have classes and attributes does have the advantage
that one need not decide between the two approaches.

I guess I am arguing, in part, against what I said
before but in thinking about this more I realize
that all three viewpoints seem to have some merit
and it's actually quite difficult to choose.



Re: [Rd] Lightweight data frame class

2004-11-25 Thread Gabor Grothendieck
Vadim Ogranovich vograno at evafunds.com writes:

: 
: Hi,
: 
: As far as I can tell data.frame class adds two features to those of
: lists:
: * matrix structure via [,] and [,]- operators  (well, I know these are
: actually [(i, j, ...), not [,]). 
: * row names attribute.
: 
: It seems that the overhead of the support for the row names, both
: computational and RAM-wise, is rather non-trivial. I frequently
: subscript from a data.frame, i.e. use [,] on data frames, and my timing
: shows that the equivalent list operation is about 7 times faster, see
: below.
: 
: On the other hand, at least in my usage pattern, I really rarely benefit
: from the row names attribute, so as far as I am concerned row names is
: just an overhead. (Of course the speed difference may be due to other
: factors, the only thing I can tell is that subscripting is very slow in
: data frames relative to in lists).
: 
: I thought of writing a new class, say lightweight.data.frame, that would
: be polymorphic with the existing data.frame class. The class would
: inherit from list and implement [,], [,]- operators. It would also
: implement the rownames function that would return seq(nrow(x)), etc.
: It should also implement as.data.frame to avoid the overhead of
: conversion to a full-blown data.frame in calls like lm(y ~ x,
: data=myLightweightDataframe).

The next version of zoo (currently in 
test) supports lists in the data argument of lm
and can also merge zoo series into a list (or
to another zoo series, as it does now).
Would that be a sufficient alternative?



Re: [Rd] Suggestions for packages / help / index (long mail)

2004-11-24 Thread Gabor Grothendieck
Eric Lecoutre lecoutre at stat.ucl.ac.be writes:

: 6. Final point has already been discussed in the past. It is about misc 
: packages and pieces of code. I propose the creation of 5 packages:
:   - miscGraphics (keywords: misc, Graphics)
:   - miscStatistics (keywords: misc, Statistics)
:   - miscMathematics (keywords: misc, Mathematics)
:   - miscBasics (keywords: misc, Basics)
:   - miscProgramming (keywords: misc, Programming)

Rather than preset the categories, perhaps evolving them would
be better: start out with a single Misc package and then 
decompose it into multiple packages as the categories become
clear.



Re: [Rd] Suggestions for packages / help / index (long mail)

2004-11-24 Thread Gabor Grothendieck
Eric Lecoutre lecoutre at stat.ucl.ac.be writes:

: 
: At 15:06 24/11/2004, Gabor Grothendieck wrote:
: Eric Lecoutre lecoutre at stat.ucl.ac.be writes:
: 
: : 6. Final point has already been discussed in the past. It is about misc
: : packages and pieces of code. I propose the creation of 5 packages:
: :   - miscGraphics (keywords: misc, Graphics)
: :   - miscStatistics (keywords: misc, Statistics)
: :   - miscMathematics (keywords: misc, Mathematics)
: :   - miscBasics (keywords: misc, Basics)
: :   - miscProgramming (keywords: misc, Programming)
: 
: Rather than preset the categories perhaps evolving them would
: be better, just starting out with a single Misc package and then
: decomposing it into multiple packages as the categories become
: clear.
: 
: Those categories are taken from KEYWORDS (master entries). I guess it 
: wouldn't be difficult to still have substantial entries for those packages, 
: if some misc package maintainer would make the job to break their package 
: into pieces. BTW, I have to admit this choice is not easy to make for 
: several reasons, the main one being to keep the ability to modify one's 
: own contributions.
: For those packages, a collaborative platform such as SourceForge and so 
: on, with Sync-ability, could be a good choice.
: 
: Eric

Sorry, I did not understand the keyword connection you were making.
My comment was based on the 80/20 idea that if 80% of the software
gets contributed to 20%, i.e. one of the packages, then perhaps having
5 is superfluous.  If the categories are made afterwards, rather than
before, one can construct them to ensure a more even number of routines.



Re: [Rd] Building Packages on Windows using .Rbuildignore (PR#7379)

2004-11-21 Thread Gabor Grothendieck
Gabor Grothendieck ggrothendieck at myway.com writes:

 
 Duncan Murdoch murdoch at stats.uwo.ca writes:
 
 : 
 : On Sat, 20 Nov 2004 03:47:50 + (UTC), Gabor Grothendieck
 : ggrothendieck at myway.com wrote:
 : 
 : Duncan Murdoch murdoch at stats.uwo.ca writes:
 : 
 :  
 :  On Sat, 20 Nov 2004 00:39:17 + (UTC), Gabor Grothendieck
 :  ggrothendieck at myway.com wrote:
 :  
 :  : Even with this change, Rcmd check is still going to install the files
 :  : it's supposed to ignore, because it uses Rcmd INSTALL, and there's no
 :  : .Rbuildignore support there.
 :  : 
 :  
 :  If the behaviour is suddenly changed then this is going to cause work
 :  for people whose scripts depend on the current behavior. 
 :  
 :  Yes, that's normal.   If you work around a bug and the bug gets fixed,
 :  then you will need to change your code.  That's why the NEWS file
 :  reports bug fixes and other changes.



Assuming that the behavior of the R CMD commands is changed
so that they respect .Rbuildignore but that no flag or
switch is immediately implemented to turn on and off
.Rbuildignore processing, I have the following workaround
to allow the user to implement either situation.
It is simple enough that it does not require a script.  
I have not actually tried it out yet since the changed
commands are not yet available.

It applies to Windows or UNIX but is described below relative
to Windows and to an assumed folder tree called mypkg
which is the folder that contains .Rbuildignore.

1. in Windows Explorer create a second folder called
mypkg-shortcuts

2. select all files and folders directly within mypkg except
.Rbuildignore and right-click drag them to mypkg-shortcuts
selecting Create ShortCut after letting up on the right
mouse button.  That will create a shortcut to each file and 
folder in mypkg.  (Note that the files and folders dragged
include DESCRIPTION, R, man, etc. directly in mypkg but 
not the .files another level deep such as the .R and .Rd
files within R and man.)

3. now run off mypkg if the .Rbuildignore version is wanted
and run off mypkg-shortcuts if not.  Any time a file
in mypkg is changed in mypkg it will automatically change in
mypkg-shortcuts since they are all links.  (Special care
should be taken if files are added or deleted immediately
within mypkg; however, this is relatively rare since no
source or documentation is there.)

Note that the R and man folders within mypkg-shortcuts are
themselves links to the corresponding folders in mypkg so
that any change that occurs in mypkg\R and mypkg\man will
automatically be reflected in the mypkg-shortcuts versions.



Re: [Rd] Building Packages on Windows using .Rbuildignore (PR#7379)

2004-11-21 Thread Gabor Grothendieck
Duncan Murdoch murdoch at stats.uwo.ca writes:

: 
: On Mon, 22 Nov 2004 01:54:53 + (UTC), Gabor Grothendieck
: ggrothendieck at myway.com wrote:
: 
: Assuming that the behavior of the R CMD commands is changed
: so that they respect .Rbuildignore but that no flag or
: switch is immediately implemented to turn on and off
: .Rbuildignore processing
:  I have the following workaround
: to allow the user to implement either situation.
: It is simple enough that it does not require a script.  
: I have not actually tried it out yet since the changed
: commands are not yet available.
: 
: It applies to Windows or UNIX but is described below relative
: to Windows and to an assumed folder tree called mypkg
: which is the folder that contains .Rbuildignore.
: 
: 1. in Windows Explorer create a second folder called
: mypkg-shortcuts
: 
: 2. select all files and folders directly within mypkg except
: .Rbuildignore and right-click drag them to mypkg-shortcuts
: selecting Create ShortCut after letting up on the right
: mouse button.  That will create a shortcut to each file and 
: folder in mypkg.  (Note that the files and folders dragged
: include DESCRIPTION, R, man, etc. directly in mypkg but 
: not the .files another level deep such as the .R and .Rd
: files within R and man.)
: 
: I don't think it will work in Windows.  Windows shortcuts aren't soft
: links, they're just files that the shell sometimes interprets that
: way.  Most other programs just see them as regular files, and won't
: know to use them as links.


Good point although I think it could still be made to work with
Folder Shortcuts (not to be confused with Shortcuts to Folders)
which are real folders that point to other folders and are
described in the thread at:

http://www.annoyances.org/exec/forum/winxp/1076436311

Folder Shortcuts are not the same as Shortcuts to
Folders.  For example, if you delete a Folder Shortcut it 
deletes the target folder, so you have to undo it by
reversing the creation procedure.

What one would do is to copy the files directly under mypkg,
i.e. DESCRIPTION, etc.  to a new folder mypkg2.  These
files don't normally change much anyway, as they do not include
either the source or the documentation, so it should be ok in
most situations that they are copied rather than linked. Then 
create folder shortcuts as described in the link in mypkg2 which 
point to the corresponding folders in mypkg.

In a small test I found that the Folder Shortcut worked with a 
small gawk program while the Shortcut to the Folder did not.

Unfortunately this all increases complexity so one might
have to go back to creating a script after all.  Thus it
would be nice if there were some built-in way to turn it on
and off.

Of course the best solution would be to be able to avoid this
all in the first place and if it came to this I would probably
just make copies since I already have scripts that do that.



Re: [Rd] strptime() bugs?

2004-11-21 Thread Gabor Grothendieck
Juan Santiago Ramseyer juan_sr at uol.com.br writes:

: 
: strptime return wrong data?, look the R session.
: 
: # datetxt: vector of date in text format
: > datetxt <- c("1939-Oct-06 00:00:00", "1939-Oct-07 00:00:00", 
:"1939-Oct-08 00:00:00", "1939-Oct-09 00:00:00")
: 
: > datehour <- strptime(datetxt, format="%Y-%b-%d %H:%M:%S")
: > datehour
: [1] "1939-10-06 00:00:00" "1939-10-07 00:00:00" "1939-10-08 01:00:00"
: [4] "1939-10-09 00:00:00"
: 
: # look value of datehour[3] element, where hour output equal 01:00:00,
: # but hour input equal 00:00:00.
: 

On my machine it comes out like this:

R> datetxt <- c("1939-Oct-06 00:00:00", "1939-Oct-07 00:00:00", 
+ "1939-Oct-08 00:00:00", "1939-Oct-09 00:00:00")
R> datehour <- strptime(datetxt, format="%Y-%b-%d %H:%M:%S")
R> datehour
[1] "1939-10-06" "1939-10-07" "1939-10-08" "1939-10-09"
 
R> R.version.string # Windows XP
[1] "R version 2.0.1, 2004-11-04"



Re: [Rd] Building Packages on Windows using .Rbuildignore (PR#7379)

2004-11-19 Thread Gabor Grothendieck
Duncan Murdoch murdoch at stats.uwo.ca writes:

: 
: On Thu, 18 Nov 2004 19:32:25 +0100, Martin Maechler
: maechler at stat.math.ethz.ch wrote:
: 
:  Duncan == Duncan Murdoch murdoch at stats.uwo.ca
:  on Thu, 18 Nov 2004 16:36:03 +0100 (CET) writes:
: 
: Duncan On Thu, 18 Nov 2004 00:38:47 + (UTC), Gabor
: Duncan Grothendieck ggrothendieck at myway.com wrote :
: 
:  DIFFERENCE BETWEEN USING .RBUILDIGNORE AND NOT
:  
:  The reason that the processing is different according to
:  whether one uses .Rbuildignore or not is that R CMD build
:  takes the .Rbuildignore file into account but R CMD
:  install R CMD check R CMD build --binary do not take
:  .Rbuildignore into account.
: 
: Duncan Okay, now I understand.  I think I'd call the last
: Duncan of those a bug, and it would seem to me that the
: Duncan install and check scripts should also respect this
: Duncan directive.  I've now copied this to the r-bugs list.
: 
: Duncan (This was reported for Windows; I don't know if it
: Duncan applies to other platforms as well.)
: 
: Yes it does (*), but I think it is NOT a bug but a feature,
: at least for check and install (*) 
: and very much desired in some cases :
: 
: For instance, the package developer may want more
: regression tests in Pkg/tests/ :
: 
: 1) Have extra *.Rout.save files that are architecture
:dependent and hence not for general distribution of the
:package, but important for the package developer in order to
:assure that functionality doesn't change when the package is
:extended, reorganized, 
: 
: 2) Have more  tests/*.R  files  that take a long time to run.
:Time that the package developer wants to spend, but doesn't
:dare to put on the daily CRAN or Bioconductor test suites.
: 
: 3) similarly for vignettes
: 
: 4) similar issues for experimental  R/*.R files  or man/*.Rd
:files for these.
: 
: One thing that would be quite useful and would even solve
: Gabor's problem: 
: The introduction of a new command line switch, say --build-ignore,
: to the commands 'R CMD check' and 'R CMD install'
: 
: Shouldn't that option be --ignore-build-ignore? 
: 
: More seriously:  I suspect that an on/off switch for the .Rbuildignore
: file wouldn't be sufficiently flexible: the same package author might
: want different things at different times, e.g. building a binary for
: Windows would include different files than a binary for another
: platform.  We could put together some sort of system of conditionals
: within the .Rbuildignore file, but I think it would be better to just
: advise such package writers to maintain one or more .Rbuildignore.in
: files, and manually (or by makefile) copy the appropriate one in place
: first.

That would be useful if there are several excluded files: a.R, b.R
and c.R and one wants to force inclusion of just a.R and b.R, say, but
continue to exclude c.R.   One could make a temporary copy
of .Rbuildignore, edit it to no longer exclude a.R and then use
R CMD as usual with the new switch/flag.

During development this could be quite useful as one tests out
different combinations in the presence or absence of other
files.

An even more elaborate design of this would be to have --include= and 
--exclude= switches/flags in which one can specify specific patterns
to include or exclude, or @myfile to get them from a file.

Some typical examples might look like:
--include=@myfile   (gets file from same directory as .Rbuildignore)
--include=@./myfile (gets file from current directory)
--include=@/myfile  (gets file from root directory)
--include=a.R|b.R   (excludes all in .Rbuildignore except a.R and b.R)
--exclude=.Rbuildignore (this is the default)

Not sure how much work that is to implement but I can see it being
useful.
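In the meantime the proposed --include= behaviour can be approximated with a
small wrapper that filters a copy of .Rbuildignore before building.  A rough
sh sketch under fabricated assumptions (the package directory, file names and
pattern list are all invented for the demo; the actual R CMD build step is
shown commented out, since the filtering is the point):

```shell
# Hypothetical emulation of --include=a.R|b.R: strip those patterns from a
# copy of .Rbuildignore, build, then restore the original file.
PKG=/tmp/mypkg                                     # demo package directory
mkdir -p "$PKG"
printf '%s\n' a.R b.R c.R > "$PKG/.Rbuildignore"   # demo ignore file
INCLUDE='a.R|b.R'                                  # patterns to force back in
cp "$PKG/.Rbuildignore" "$PKG/.Rbuildignore.orig"
grep -Ev "^($INCLUDE)$" "$PKG/.Rbuildignore.orig" > "$PKG/.Rbuildignore"
cat "$PKG/.Rbuildignore"                           # only c.R remains excluded
# ...here one would run:  R CMD build "$PKG"
mv "$PKG/.Rbuildignore.orig" "$PKG/.Rbuildignore"  # restore afterwards
```

This keeps the checked-in .Rbuildignore as the single source of truth and
confines the include logic to the wrapper.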

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Building Packages on Windows using .Rbuildignore (PR#7379)

2004-11-19 Thread Gabor Grothendieck
Duncan Murdoch murdoch at stats.uwo.ca writes:

 
 On Sat, 20 Nov 2004 00:39:17 + (UTC), Gabor Grothendieck
 ggrothendieck at myway.com wrote:
 
 : Even with this change, Rcmd check is still going to install the files
 : it's supposed to ignore, because it uses Rcmd INSTALL, and there's no
 : .Rbuildignore support there.
 : 
 
 If the behaviour is suddenly changed then this is going to cause work
 for people whose scripts depend on the current behavior. 
 
 Yes, that's normal.   If you work around a bug and the bug gets fixed,
 then you will need to change your code.  That's why the NEWS file
 reports bug fixes and other changes.
 
  In order to
 minimize disruption I would ask that such change only be made at the
 same time that a flag for turning on and off .Rbuildignore processing
 is implemented on build, check, install and build --binary.  
 
 There's a simple workaround to turn .Rbuildignore processing off: just
 rename the file.  Adding a switch is *not* a prerequisite for the
 other changes.
 
 Even
 with such a flag it may require revision to scripts but at least
 any change with the flag will be minimal.  Even better, it may
 mean some scripts can be eliminated.
 
 There are 3 changes that I would contemplate:
 
 1.  Fix the bug that means R CMD check looks in the wrong place for
 .Rbuildignore.
 
 2.  Make R CMD build --binary consistent with R CMD build in its
 handling of .Rbuildignore.
 
 3.  Make R CMD install and R CMD check consistent with R CMD
 build in their handling of .Rbuildignore.
 
 Number 1 should definitely be fixed in the patches to 2.0.1.  I have a
 feeling that both 2 and 3 should be done (and 2 would be an automatic
 consequence of 3 unless we took action to stop it), but I'd put them
 in 2.1.0, not 2.0.x.
 
 Martin and you have also suggested 
 
 4.  Add another flag to Rcmd build (and install and check?), to turn
 .Rbuildignore processing on and off, for increased flexibility or for
 backward compatibility.
 
 My own feeling is that this doesn't increase flexibility enough, and
 I'd like a better solution, but we've got lots of time before 2.1.0 is
 released to discuss this.

I did not know it was a bug and even you did not realize it until you
looked at the code.

I do have one suggestion that might be trivial for you yet be beneficial
for everyone else, as an interim step, until a better solution comes about.

After fixing the scripts, could you leave the old scripts in bin 
with new names, e.g. oldbuild, oldcheck, etc. so that one can issue
the command:

   R CMD oldbuild ...

and get the old behavior.  That provides a simple way to get either
behavior while waiting for the ultimate solution and does not interfere
with the new scripts in any way.



Re: [Rd] Building Packages on Windows using .Rbuildignore (PR#7379)

2004-11-19 Thread Gabor Grothendieck
Duncan Murdoch murdoch at stats.uwo.ca writes:

: 
: On Sat, 20 Nov 2004 03:47:50 + (UTC), Gabor Grothendieck
: ggrothendieck at myway.com wrote:
: 
: Duncan Murdoch murdoch at stats.uwo.ca writes:
: 
:  
:  On Sat, 20 Nov 2004 00:39:17 + (UTC), Gabor Grothendieck
:  ggrothendieck at myway.com wrote:
:  
:  : Even with this change, Rcmd check is still going to install the files
:  : it's supposed to ignore, because it uses Rcmd INSTALL, and there's no
:  : .Rbuildignore support there.
:  : 
:  
:  If the behaviour is suddenly changed then this is going to cause work
:  for people whose scripts depend on the current behavior. 
:  
:  Yes, that's normal.   If you work around a bug and the bug gets fixed,
:  then you will need to change your code.  That's why the NEWS file
:  reports bug fixes and other changes.
:  
:   In order to
:  minimize disruption I would ask that such change only be made at the
:  same time that a flag for turning on and off .Rbuildignore processing
:  is implemented on build, check, install and build --binary.  
:  
:  There's a simple workaround to turn .Rbuildignore processing off: just
:  rename the file.  Adding a switch is *not* a prerequisite for the
:  other changes.
:  
:  Even
:  with such a flag it may require revision to scripts but at least
:  any change with the flag will be minimal.  Even better, it may
:  mean some scripts can be eliminated.
:  
:  There are 3 changes that I would contemplate:
:  
:  1.  Fix the bug that means R CMD check looks in the wrong place for
:  .Rbuildignore.
:  
:  2.  Make R CMD build --binary consistent with R CMD build in its
:  handling of .Rbuildignore.
:  
:  3.  Make R CMD install and R CMD check consistent with R CMD
:  build in their handling of .Rbuildignore.
:  
:  Number 1 should definitely be fixed in the patches to 2.0.1.  I have a
:  feeling that both 2 and 3 should be done (and 2 would be an automatic
:  consequence of 3 unless we took action to stop it), but I'd put them
:  in 2.1.0, not 2.0.x.
:  
:  Martin and you have also suggested 
:  
:  4.  Add another flag to Rcmd build (and install and check?), to turn
:  .Rbuildignore processing on and off, for increased flexibility or for
:  backward compatibility.
:  
:  My own feeling is that this doesn't increase flexibility enough, and
:  I'd like a better solution, but we've got lots of time before 2.1.0 is
:  released to discuss this.
: 
: I did not know it was a bug and even you did not realize it until you
: looked at the code.
: 
: I do have one suggestion that might be trivial for you yet be beneficial
: for everyone else, as an interim step, until a better solution comes about.
: 
: After fixing the scripts, could you leave the old scripts in bin 
: with new names, e.g. oldbuild, oldcheck, etc. so that one can issue
: the command:
: 
:R CMD oldbuild ...
: 
: and get the old behavior.  That provides a simple way to get either
: behavior while waiting for the ultimate solution and does not interfere
: with the new scripts in any way.
: 
: I think you misunderstand the consequences of fixing the bug.  If I
: fix #1, it should not break any scripts.  It will just stop Rcmd
: check from giving a few false alarms about files that you didn't want
: to distribute anyways.  Those files will still be installed in the
: temporary directory for the checks to run.  

Perhaps I used the wrong words.  I do checks with and without 
.Rbuildignore processing so having both facilities is convenient.

: 
: It is only changes #2 and #3 that would potentially break scripts,
: which is why I'd save them for 2.1.0. 

I agree that the two-stage approach would reduce the extent of the
problem of the change in behavior.

: 
: As to the suggestion of leaving both versions of the scripts in place:
: no, I wouldn't do that.  There's nothing to stop you from copying the
: script from your old version to the new one (or editing the new one to
: do something completely different, for that matter).  But if I were to
: add three new scripts to the collection, I'd have to document them,
: and people would have to maintain them.  All in all, a big waste of
: our time to save a little bit of yours?  No thanks.

You don't need to document deprecated code.  It's just there for the
convenience of the user base that has invested in it.



Re: [Rd] Building Packages on Windows using .Rbuildignore (PR#7379)

2004-11-18 Thread Gabor Grothendieck


 murdoch at stats.uwo.ca writes:

: 
: On Thu, 18 Nov 2004 00:38:47 + (UTC), Gabor Grothendieck
: ggrothendieck at myway.com wrote :
: 
: DIFFERENCE BETWEEN USING .RBUILDIGNORE AND NOT
: 
: The reason that the processing is different according to whether one
: uses .Rbuildignore or not is that 
:   R CMD build 
: takes the .Rbuildignore file into account but 
:   R CMD install
:   R CMD check
:   R CMD build --binary 
: do not take .Rbuildignore into account.  
: 
: Okay, now I understand.  I think I'd call the last of those a bug, and
: it would seem to me that the install and check scripts should also
: respect this directive.   I've now copied this to the r-bugs list.
: (This was reported for Windows; I don't know if it applies to other
: platforms as well.)
: 
: Just for clarification:  I can see you would use this when you have
: S-PLUS code in the same directory as R code, and you don't want to
: include that in an R build.  Are there other files that must be
: excluded?

My primary directory is mirrored by svn and I put everything
in there that I want other developers to be able to see,
even if it's not ready or not intended for inclusion in the
build.

The .Rbuildignore file is important for distinguishing what is to be
built from what is just there to be shared.

Some instances where this is useful are:
- have abc.R but don't want it in the build until abc.Rd is available
- have abc.Rd but don't want it in the build until abc.R is written
- abc.R is available but still too buggy, yet it needs to be shared with
  others so they know about it
- code which interfaces with package X and will
  ultimately go into package X, not this one, but needs to be
  around in the meantime
- code which depends on a package that we don't want to invoke
  for now and will probably be replaced anyway
- certain notes to be shared among the developers

Also it would really be nice to be able to turn
.Rbuildignore processing on and off using a command line switch on
R CMD ... as someone on the thread has already suggested.  It
can be useful to check and install the ignored material to
ensure it's ready before actually removing it from
.Rbuildignore.  It might even be nice to be able to specify
alternate .Rbuildignore files, although I am not sure whether
that is going too far.

What I am doing in the meantime is that I have a Windows
batch script that can be used like this:

   myscript install mypkg
   myscript check mypkg
   myscript binary mypkg

where each of these first does a build creating a .tar.gz
file and then detars it into another directory and invokes
install, check or build --binary on the built copy of the
source in that directory.

If I want to do an install, check or build --binary without
respecting the .Rbuildignore then I just use the original
R CMD ... commands instead of my script.
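The actual batch file is not shown in the thread; a rough sh analogue of the
workflow it describes (build first, which honors .Rbuildignore, unpack the
tarball, then run the requested command on the unpacked copy) might look like
the sketch below.  The function name, paths and layout are illustrative
assumptions, not the poster's script:

```shell
# Sketch of the described wrapper: R CMD build honors .Rbuildignore, so the
# unpacked tarball is a source tree without the ignored files, and the
# requested subcommand is then run against that copy.
myscript() {
  cmd=$1; pkg=$2
  case $cmd in
    install|check|binary) ;;
    *) echo "usage: myscript {install|check|binary} pkgdir" >&2; return 2 ;;
  esac
  work=$(mktemp -d)
  R CMD build "$pkg" || return 1               # creates pkg_x.y-z.tar.gz here
  tarball=$(ls "$(basename "$pkg")"_*.tar.gz | tail -1)
  tar -xzf "$tarball" -C "$work"               # unpack the built source
  built=$work/$(basename "$pkg")
  case $cmd in
    install) R CMD INSTALL "$built" ;;
    check)   R CMD check "$built" ;;
    binary)  R CMD build --binary "$built" ;;
  esac
}
```

Running the plain R CMD ... commands directly, as the poster notes, then gives
the behavior without .Rbuildignore processing.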



[Rd] Re: R/S-related projects on Sourceforge? Trove Categorization

2004-11-17 Thread Gabor Grothendieck
Witold Eryk Wolski [EMAIL PROTECTED] writes:

: 
: Hi R-Users and Developers,
: 
: Several months ago I made a request on Sourceforge to add the R/S - 
: programming language to the _Trove_ categorization. (The Trove is a 
: means to convey basic metainformation about your project.)

It might be worthwhile to consider all the possible sourceforge-like
sites in case there is a particular advantage of one over the others.
For example, it might be better to choose one with svn support than
one that only has cvs support. Sites include sourceforge, savannah, 
berlios, seul and a few others, some of which are listed at
http://www.ibiblio.org/fosphost/exhost.htm .

Also some languages have started their own sourceforge-style
project repositories such as luaforge.net and rubyforge.org and one
could conceivably have an Rforge too.

I don't know enough about the above sites to say much more but I think
the key requirements are to provide support to addon packages and
projects to encourage the current explosion of such software.
The idea would be to automate the setup of an infrastructure that
includes such things as:

- X-help and X-devel email distribution lists for project X
- access to svn so developers can coordinate their efforts on a project
- wiki, forums, other groupware
- web space for a home page
- ftp space for archiving old versions and distributing beta versions
- slick package development tools to make it easier to develop packages
- ensure that there are a really good set of base classes in the core
  for projects to leverage off of



[Rd] Building Packages on Windows using .Rbuildignore

2004-11-17 Thread Gabor Grothendieck

I have some questions about building packages in Windows 
when using .Rbuildignore .  The part of the process that
is of interest here is the part that creates the source
tree from the tree that contains the .Rbuildignore file.
That is, the part of the process that does a build of
the original tree creating a .tar.gz and then extracts
this file into a source directory that can be used by
check, build --binary and install.

1. makefiles

I was developing some batch files for myself using R CMD ...
but then noticed the Windows makefiles.

- Do the makefiles take into account the situation above?

- If so, do I need to do anything special to use them this way?

2. configure.win

Also I noticed reference to configure.win in the Extensions
manual and am not sure if that can be used to address this.
I tried creating a one-line configure.win which just does
an echo of a fixed string (echo XYZ) but I did not see XYZ
during my R CMD install.

- Is configure.win supposed to be written using Windows batch commands
  or UNIX-like commands?

- Where does it go?  (I tried putting it in the same directory as
  my DESCRIPTION file.)

- is it a suitable device for handling the .Rbuildignore issue?

Thanks.



Re: [Rd] Building Packages on Windows using .Rbuildignore

2004-11-17 Thread Gabor Grothendieck
Duncan Murdoch murdoch at stats.uwo.ca writes:

: 
: On Wed, 17 Nov 2004 18:10:20 + (UTC), Gabor Grothendieck
: ggrothendieck at myway.com wrote :
: 
: 
: I have some questions about building packages in Windows 
: when using .Rbuildignore .  The part of the process that
: is of interest here is the part that creates the source
: tree from the tree that contains the .Rbuildignore file.
: That is, the part of the process that does a build of
: the original tree creating a .tar.gz and then extracts
: this file into a source directory that can be used by
: check, build --binary and install.
: 
: 1. makefiles
: 
: I was developing some batch files for myself using R CMD ...
: but then noticed the Windows makefiles.
: 
: - Do the makefiles take into account the situation above?
: 
: - If so, do I need to do anything special to use them this way?
: 
: I don't think I understand exactly what you're asking.  

If one does not use .Rbuildignore then with a pure R package
one can run off the original source files, in this case
in /wherever/mypkg, like this:
...without .Rbuildignore ...
R CMD install /wherever/mypkg
R CMD check /wherever/mypkg
R CMD build /wherever/mypkg
R CMD build /wherever/mypkg --binary

but if one DOES use .Rbuildignore then one must first build the
package to remove the .Rbuildignore cited files and then run off
the built files which were created by unpacking the tar.gz file.
Below we build the files in /wherever/mypkg and put the resulting
.tar.gz in the current directory and then unpack it and run the
remaining commands off the unpacked directory (unlike the previous
case where we ran everything off the original directory, viz.
/wherever/mypkg):

... with .Rbuildignore ...

R CMD build /wherever/mypkg
tar xvzf mypkg_0.9-1.tar.gz

... now that we have created a new source directory, mypkg, without
... the .Rbuildignore files we can proceed by running off of that.
... Note that mypkg is different from /wherever/mypkg as the former
... does not contain the ignored files.
R CMD install mypkg
R CMD check mypkg
R CMD build mypkg --binary

I was writing a batch file to handle this second case for me
but was wondering if I was just duplicating work that has
already been done or can easily be done with existing makefiles
or somehow using configure.win.


: 
: 2. configure.win
: 
: Also I noticed reference to configure.win in the Extensions
: manual and am not sure if that can be used to address this.
: I tried creating a one line configure.win which just does
: an echo of a fixed string echo XYZ but I did not see XYZ
: during my R CMD install.
: 
: - Is configure.win supposed to be written using Windows batch commands
:   or UNIX-like commands?
: 
: It is run under sh, not the standard Windows interpreter, so
: UNIX-like.  
: 
: I just tried it with Rcmd install, and I did see the message echoed.
: I didn't see it for Rcmd build, because that doesn't build the
: package, it just puts it in a tarball, but Rcmd build --binary does
: run it.
: 
: 
: - where does it go.  (I tried putting it in the same directory as
:   my DESCRIPTION file.)
: 
: It should be in the same directory as DESCRIPTION.
: 
: - is it a suitable device for handling the .Rbuildignore issue?
: 
: What's the issue exactly?
: 
: Duncan Murdoch



Re: [Rd] Building Packages on Windows using .Rbuildignore

2004-11-17 Thread Gabor Grothendieck
Duncan Murdoch murdoch at stats.uwo.ca writes:

 
 On Wed, 17 Nov 2004 20:44:48 + (UTC), Gabor Grothendieck
 ggrothendieck at myway.com wrote :
 
 If one does not use .Rbuildignore then with a pure R package
 one can run off the original source files, in this case
 in /wherever/mypkg, like this:
 ...without .Rbuildignore ...
 R CMD install /wherever/mypkg
 R CMD check /wherever/mypkg
 R CMD build /wherever/mypkg
 R CMD build /wherever/mypkg --binary
 
 but if one DOES use .Rbuildignore then one must first build the
 package to remove the .Rbuildignore cited files and then run off
 the built files which were created by unpacking the tar.gz file.
 
 ... or what happens?  I still don't know exactly what you are claiming
 goes wrong.  

QUESTION

I am not claiming anything is wrong.  I was simply describing what
processing is needed to build a package that only uses R, noting that
such processing is different if one uses .Rbuildignore.  My question
was to ask:

- are there already scripts, makefiles or some other software or hooks
  such as configure.win to streamline this

EXAMPLE

For example, in
  http://tolstoy.newcastle.edu.au/R/help/04/11/6729.html
John Fox provided a batch file that automated package creation in the
case that there is only R code and one does not use .Rbuildignore.
I do use .Rbuildignore and need to do the analogous thing.   Most of my
past two posts described the manual commands that must be done in
each of these two cases to make clear what the process that I want
to streamline is and how it differs in these two cases.

DIFFERENCE BETWEEN USING .RBUILDIGNORE AND NOT

The reason that the processing is different according to whether one
uses .Rbuildignore or not is that 
  R CMD build 
takes the .Rbuildignore file into account but 
  R CMD install
  R CMD check
  R CMD build --binary 
do not take .Rbuildignore into account.  

 
 It might be that the behaviour should be changed, or maybe there's an
 existing workaround, but I'd like to see a clear statement of the
 problem first.

Certainly it's desirable to streamline this whole situation with a
makefile, script or hook, or perhaps one could modify the individual
commands R CMD install, R CMD check and R CMD build --binary to
take .Rbuildignore into account, possibly via a switch/flag.  However,
all I was asking about is what is already there.

Thanks.



Re: [Rd] Questions on package creation

2004-11-11 Thread Gabor Grothendieck
Thanks.  One thing I would like clarified is the various locations
and what is relative to where.

If you could add to the example an indication of where we are, in terms
of an example absolute path, and of where the various directories wind
up, also in terms of example absolute paths, it would clear up a lot.

Liaw, Andy andy_liaw at merck.com writes:

: 
: Gabor,
: 
: Here are my takes on these:
: 
: 1: Not that I know of.  Only the package developer(s) see the code this way,
: and it is really not different from 3.  (A developer would not need 3, as R CMD
: check can be done with 1.)
: 
: 2  4: As Uwe said, the location is `whereever you like'.  You would create
: them only for distribution (i.e., installation on other machines), so it
: doesn't matter where you put them.  They only exist temporarily on the
: developer's machine (assuming the source is version controlled via other
: means), as only a user wanting to install the package would need the archive
: forms.
: 
: Here's the sequence I use:
: 
: Start with a source tree for the package somewhere.
: 
: Testing/checking:
: 
: Run R CMD INSTALL -l mytest mypkg (making sure you create the `mytest'
: directory first).  This installs the package in the `mytest' directory.  If
: the install is not successful, make necessary changes.  If successful, start
: up R (from the command line) and do library(mypkg, lib.loc=mytest) and run
: some tests (e.g., example() for functions in the package).
: 
: After making sure the installed package looks good, run R CMD check mypkg.
: Make changes to correct errors and warnings.
: 
: After successful check, run R CMD build --force mypkg to generate the source
: tarball and update the INDEX file.  (The --force is only needed if you
: add/remove/change help page topics since the last build.)
: 
: Run R CMD build --binary mypkg if you need to distribute the package in
: binary form.
: 
: HTH,
: Andy
: 
:  From: Gabor Grothendieck
:  
:  I have some questions about 
:  
: 1. nomenclature, 
: 2. recommended file locations and 
: 3. overall procedure related to creating packages.
:  
:  To the extent that it matters, examples here relate to Windows XP
:  R 2.0.1 beta.  
:  
:  The questions are interspersed and prefaced with ***.
:  
:  My understanding is that there are actually 6 forms of a package
:  that one should use in package development:
:  
:  1. original package.  This refers to the original source files, 
: documentation and other files that the author develops.  
: If source control, e.g. svn, is used then these are the files 
: that are under source control.  They are kept in some arbitrary 
: location on one's disk.  Let us say \usr\mypackage, for example.
:  
:  *** Is there some standard name for this form of the package?
:  
:  2. source archive.  This is created from the original package
: like this:
:  
:   cd \Program Files\rw2001beta
:   bin\R CMD build /usr/mypackage
:  
: which creates, say
:  
:   \Program Files\rw2001beta\mypackage_1.0-1.tar.gz
:  
: The source archive is distinct from the original package since it
: is specific to a version of R and excludes the files referenced
: in \usr\mypackage\.Rbuildignore
:  
:  *** Is \Program Files\rw2001beta the correct place to put this
: .tar.gz file?
:  
:  3. source tree.  This is created from the gzipped tar archive in #2
: like this:
:  
:   cd \Program Files\rw2001beta
:   gzip -d mypackage_1.0-1.tar.gz
:   cd src\library
:   tar xvf ..\..\mypackage_1.0-1.tar
:  
: and is checked like this:
:  
:   cd \Program Files\rw2001beta
:   bin\R CMD check mypackage
:  
:  4. binary archive.  This is created from the source archive in #2
: or the source tree in #3:
:  
:cd \Program Files\rw2001beta
:bin\R CMD build mypackage --binary
:  
: which creates \Program Files\rw2001beta\mypackage_1.0-1.zip
:  
:  *** Is \Program Files\rw2001beta the correct place to put this?
:  
:  5. installed package.  This is installed by:
:  
:   cd \Program Files\rw2001beta
:   bin\R CMD install mypackage
:  
:which results in the source package being installed in:
:
:   \Program Files\rw2001beta\library\mypackage
:  
:This can alternately be done with the R GUI menu: 
:  
:  Packages | Install package(s) from local zip files
:  
:  6. loaded package.  In R using the command:
:  
:   library(mypackage)
:  
: loads the package into R.  This can alternately be done
: using the R GUI menu:
:  
:Packages | Load package
:  
:  One might initially skip #3 and #4 and just test the package out
:  in R after #6 and once one is satisfied that it is in good shape
:  repeat the sequence.
:  
:  *** Is all the above the correct and recommended sequence?
:  
:  *** Someone mentioned that --force is important.  How does that 
:  fit into all this?  I still have not used it and am not sure
:  about it.

Re: [Rd] inst/CITATION produced error during Rcmd check package

2004-11-11 Thread Gabor Grothendieck
Prof Brian Ripley ripley at stats.ox.ac.uk writes:

: That you are getting FIND not `find' suggests that you have not followed
: the instructions in README.packages correctly.  Please ensure you have
: followed


When I had this problem I found I had to put the path to the UNIX-like
tools ahead of everything in my PATH variable so that the UNIX-like find
was used rather than the Windows find.
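The mechanism is plain PATH precedence: the first directory in PATH that
contains a matching executable wins the lookup, so prepending the UNIX-like
tools directory shadows the Windows find.  A self-contained demonstration
with a throwaway stub find (the stub and its location are fabricated purely
for illustration):

```shell
# Create a throwaway tools directory containing a stub "find", put it at
# the front of PATH, and observe that it now shadows any system find.
TOOLS=$(mktemp -d)
printf '#!/bin/sh\necho unix-like find\n' > "$TOOLS/find"
chmod +x "$TOOLS/find"
PATH="$TOOLS:$PATH"        # UNIX-like tools first, as described above
find                       # prints "unix-like find"
```

The same prepending, done with the real Rtools directory, is what makes the
build scripts pick up the UNIX-like find instead of the Windows one.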



[Rd] Questions on package creation

2004-11-10 Thread Gabor Grothendieck

I have some questions about 

   1. nomenclature, 
   2. recommended file locations and 
   3. overall procedure related to creating packages.

To the extent that it matters, examples here relate to Windows XP
R 2.0.1 beta.  

The questions are interspersed and prefaced with ***.

My understanding is that there are actually 6 forms of a package
that one should use in package development:

1. original package.  This refers to the original source files, 
   documentation and other files that the author develops.  
   If source control, e.g. svn, is used then these are the files 
   that are under source control.  They are kept in some arbitrary 
   location on one's disk.  Let us say \usr\mypackage, for example.

*** Is there some standard name for this form of the package?

2. source archive.  This is created from the original package
   like this:

 cd \Program Files\rw2001beta
 bin\R CMD build /usr/mypackage

   which creates, say

 \Program Files\rw2001beta\mypackage_1.0-1.tar.gz

   The source archive is distinct from the original package since it
   is specific to a version of R and excludes the files referenced
   in \usr\mypackage\.Rbuildignore

*** Is \Program Files\rw2001beta the correct place to put this
   .tar.gz file?

3. source tree.  This is created from the gzipped tar archive in #2
   like this:

 cd \Program Files\rw2001beta
 gzip -d mypackage_1.0-1.tar.gz
 cd src\library
 tar xvf ..\..\mypackage_1.0-1.tar

   and is checked like this:

 cd \Program Files\rw2001beta
 bin\R CMD check mypackage

4. binary archive.  This is created from the source archive in #2
   or the source tree in #3:

  cd \Program Files\rw2001beta
  bin\R CMD build mypackage --binary

   which creates \Program Files\rw2001beta\mypackage_1.0-1.zip

*** Is \Program Files\rw2001beta the correct place to put this?

5. installed package.  This is installed by:

 cd \Program Files\rw2001beta
 bin\R CMD install mypackage

  which results in the source package being installed in:
  
 \Program Files\rw2001beta\library\mypackage

  This can alternately be done with the R GUI menu: 

Packages | Install package(s) from local zip files

6. loaded package.  In R using the command:

 library(mypackage)

   loads the package into R.  This can alternately be done
   using the R GUI menu:

  Packages | Load package

One might initially skip #3 and #4 and just test the package out
in R after #6 and once one is satisfied that it is in good shape
repeat the sequence.

*** Is all the above the correct and recommended sequence?

*** Someone mentioned that --force is important.  How does that 
fit into all this?  I still have not used it and am not sure
about it.



Re: [Rd] recursive default argument reference

2004-11-09 Thread Gabor Grothendieck
Vadim Ogranovich vograno at evafunds.com writes:

: 
: Hi,
: 
: It seems that a formal function argument can not default to an outer
: variable of the same name:
: 
: > x <- "foo"
: > ff <- function(x=x) x
: > ff()
: Error in ff() : recursive default argument reference
: >
: 
: Is this intentional? Why?
: 

Because then one arg could not safely refer to another. Here
x has y as its default value:

   x <- 10
   y <- 7
   f <- function(x = y, y = 3) print(x)
   f() # prints 3

Note that you could do this:

   x <- "foo"
   ff <- function(x.=x) x.
   ff() # returns "foo"
   ff(x=44) # returns 44
   ff(55) # returns 55

For a real example look at merge.data.frame.



Re: [Rd] creating a package without lazy loading

2004-11-07 Thread Gabor Grothendieck
Uwe Ligges ligges at statistik.uni-dortmund.de writes:

: 
: Gabor Grothendieck wrote:
:  
:  When I do a   
:  
:R CMD build --binary   
:  
:  then I get the messages at the end of this post unless I specify 
:  
:LazyLoad: no 
:  
:  in the DESCRIPTION file.  If I do that then everything 
:  proceeds smoothly.  R CMD check proceeds smoothly in
:  either case. Is there something I should be aware 
:  of that is causing this message when I do not turn off
:  lazy loading?
:  
:  I am using Windows XP and R 2.0.1 beta dated 2004-11-04.
:  
:  
:  Error in tools:::.read_description(file) :
:  file '/DESCRIPTION' does not exist
:  Execution halted
:  make: *** [lazyload] Error 1
:  *** Installation of zoo failed ***
:  
:  Removing 'C:/PROGRA~1/R/RW2001~1/library/zoo'
:  Restoring previous 'C:/PROGRA~1/R/RW2001~1/library/zoo'
: 
: My guess is that you used backslashes to specify the path to the 
: package, which seems to be handled incorrectly. Try forward slashes instead!
: 
: I'll try to debug and comment on the  results in yet another thread
: Building a package under WIN2000 / rw2.0.
: 
: Uwe Ligges

Thanks!!!  The backslashes were indeed the problem.



[Rd] wishlist: building packages

2004-11-07 Thread Gabor Grothendieck

I have collected together some wishlist items from my
recent experience in package building.  Some of these
are also in other posts but I thought I would put them
in one place.  This all refers to Windows XP R 2.0.1 beta.

1. Would it be possible to put the whole package building
process into R, at least for packages that do not use C?
(a) One could then leverage R's debugging facilities.  (b) It would
also address the problem that the scripts on Windows use UNIX-like
commands, including find, which conflicts with a Windows command of
the same name.  Without scripts there would be no conflict.
I had to change my PATH to put the UNIX-like tools ahead of the
Windows commands, which I was really loath to do.
(c) Also, it would presumably make the process more similar on
different platforms.  It seems that a significant portion of the
package building process is already in R anyway.

2. The idea of R CMD check is nice.  One QA sort of feature
that would be nice would be to add some metrics to the output:
- number of top level functions, number of documented functions
- number of lines of code for suitable definition of lines
Since one normally runs check anyway, it would be nice if
getting this were just a side effect of running check, so that
one does not need to issue additional commands.
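Pending such a feature, metrics of this kind are easy to approximate outside
of check with ordinary shell tools.  A rough sketch; the package tree below
is fabricated for the demo, and in practice PKG would point at a real source
directory:

```shell
# Count documented topics (man/*.Rd files) and lines of R code (R/*.R) for
# a package source tree.  The tree here is created only for demonstration.
PKG=$(mktemp -d)
mkdir -p "$PKG/R" "$PKG/man"
printf 'f <- function(x) x\ng <- function(y) y\n' > "$PKG/R/funs.R"
printf '\\name{f}\n\\title{f}\n' > "$PKG/man/f.Rd"
echo "documented topics: $(ls "$PKG"/man/*.Rd | wc -l)"
echo "lines of R code:   $(cat "$PKG"/R/*.R | wc -l)"
```

Counting man/*.Rd gives documented topics rather than documented functions
(one Rd file can alias several functions), so these numbers are only rough
proxies for the metrics suggested above.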

3. On windows there can be problems when one uses backslashes
in path names, as noted in several recent posts.  Perhaps this 
could be addressed by either a warning, error or changing the 
process to accept backslashes.

4. In a recent post it was discussed that sometimes one gets
error messages referring to an intermediate R file but no 
message regarding where to find the file or what its name is.

5. It seems like there is quite a bit of information to digest
to build a package.  I found it quite overwhelming.  I guess
with 400+ packages enough people were able to overcome this but
it would still be nice if the prerequisites could be simplified
as much as possible.  I had successfully built some small toy packages
before to convince myself I could do it but once faced with a
larger package (50-100 functions) I came up against problems that 
I had not previously seen and found I had to know more.

6. This is not an R problem per se, but I probably spent about
an hour trying to configure MiKTeX to work with the build
process.  There is probably nothing more to be done on the R side,
as the workarounds are already documented, but I hope that
MiKTeX itself gets some improvements here.

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] RFC install.packages() changes

2004-11-07 Thread Gabor Grothendieck
Prof Brian Ripley ripley at stats.ox.ac.uk writes:

: install/update.packages() in R-devel has been enhanced in a few ways,
: notably to handle multiple repositories, and to allow packages to be 
: reinstalled under the current version of R.

The change to updating that I would like would be a check, before
downloading, of whether the package is open (i.e. currently loaded).
Right now the update menu item on Windows checks afterwards, and
if the package is open it fails, which means that one has to do the
download all over again.  Failing before downloading would save
some wasted downloads.
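The pre-download test asked for here could look something like the following sketch; is_loaded is a made-up helper, not anything update.packages() provides:

```r
# Sketch of the proposed pre-download test: a package that is attached
# (so its DLL may be in use on Windows) cannot be safely replaced, so
# detect that before downloading.  is_loaded is a hypothetical helper.
is_loaded <- function(pkg) paste("package:", pkg, sep = "") %in% search()
```

update.packages() could then skip, or warn about, any package for which is_loaded() is TRUE before fetching it.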



Re: [Rd] RFC install.packages() changes

2004-11-07 Thread Gabor Grothendieck
Gabor Grothendieck ggrothendieck at myway.com writes:

: 
: Prof Brian Ripley ripley at stats.ox.ac.uk writes:
: 
: : install/update.packages() in R-devel has been enhanced in a few ways,
: : notably to handle multiple repositories, and to allow packages to be 
: : reinstalled under the current version of R.
: 
: The change to updating that I would like would be to check before 
: downloading whether the package is open.  Right now using the 
: update menu item in Windows it checks afterwards 
: and if its open fails which means that one has to do the download all
: over again.  If it failed before downloading it would save some
: wasted downloads.

While we are on this topic: when I choose Update from the Windows
menu on R 2.0.0 on XP, I now get the following, which I have never
seen before.  I am just ignoring these warning messages but am not
sure whether I should take notice.

R> update.packages()
trying URL `http://cran.r-project.org/bin/windows/contrib/2.0/PACKAGES'
Content type `text/plain; charset=iso-8859-1' length 22605 bytes
opened URL
downloaded 22Kb

Warning messages: 
1: DESCRIPTION file of package  'file2540'  missing or broken
 in: packageDescription(p, lib = lib, fields = pkgFlds) 
2: number of columns of result
not a multiple of vector length (arg 2) in: rbind(retval, c(p, lib, 
desc))



Re: [Rd] wishlist: better error message in R CMD check

2004-11-06 Thread Gabor Grothendieck
Liaw, Andy andy_liaw at merck.com writes:

 
 Gabor,
 
 My guess is that you did not try to run R CMD INSTALL before R CMD check.  R
 CMD check will try to install the package first (in pkg.Rcheck), and only if
 that's successful would checks be done.
 
 The installation process will concatenate all R files in R/ to a single file
 and essentially source() that in upon package loading.  That's where you
 would see the syntax error.  I believe the recommended way is to install the
 package and play with that a bit first, before doing R CMD check.  You'd
 find some problems are much easier to find that way (e.g., errors in
 NAMESPACE).

Thanks. Unfortunately if I run R CMD INSTALL
I get this:

[...]
  preparing package zoo for lazy loading
Error in tools:::.read_description(file) :
file '/DESCRIPTION' does not exist
Execution halted
make: *** [lazyload] Error 1
*** Installation of zoo failed ***

Removing 'C:/PROGRA~1/R/RW2001~1/library/zoo'

I get a similar message if I run R CMD build --binary .

The DESCRIPTION file definitely exists, so I assume the problem
is something else.

If I run R CMD check then it crashes (that is,
I get a popup asking me if I want to send a bug report to
Microsoft) at various points, depending on which example
code in the .Rd files I comment out.  If I comment out
enough to let it run past them all, then I get the following
(where I have added the dots at the beginning of each
line in this post to circumvent gmane's top-posting filter):

. ### * runmean
.
. flush(stderr()); flush(stdout())
.
. ### Name: runmean
. ### Title: Running Means/Maximums/Medians
. ### Aliases: runmean runmax runmed
. ### Keywords: ts
.
. ### ** Examples
.
. # x.date <- as.POSIXct(paste("2003-", rep(1:4, 4:1), "-",
. #   sample(1:28, 10, replace = TRUE), sep = ""))
. # x <- zoo(rnorm(12), x.date)
.
. # runmean(x, 5)
. # runmax(x, 5)
. # runmed(x, 5)
.
.
.
. cleanEx(); ..nameEx <- value

I assume it's the cleanEx line that is the problem, but that is
not one of my lines.  I would have tried it with an earlier
version of R, but I don't want to overwrite the version
of the package I have installed on R 2.0.0, since then I would
be unable to proceed at all.

I am using Windows XP and R 2.0.1beta .



[Rd] install/build/build --binary

2004-11-06 Thread Gabor Grothendieck
I have a question regarding package installation.

What is the difference between check, INSTALL, build and
build --binary?  Which implies which, and in what order does one
normally perform them?  I have been trying:

R CMD build /mypackage
R CMD check /mypackage
R CMD build --binary /mypackage

in that order but wanted to check that this is right. 

Also, what portion of the check process can be run right from R
rather than from a script?  Assuming my package is in /mypackage
and its subdirectories, what R command(s) from tools do I issue?
It occurs to me that if I do it from R rather than from scripts,
I could turn on debugging for the relevant tools commands and more
easily find out what they are checking when things go bad, to help
me track down my problems.
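Some of the individual checks do have entry points in the tools package, so part of this is already possible; the calls below are a sketch (the /mypackage path is the placeholder from the post), and R CMD check drives more than these three functions:

```r
# A few of the QA checks behind R CMD check are plain R functions in the
# tools package, so debug() can be turned on for them before calling,
# e.g., undoc(dir = "/mypackage") on the placeholder path from the post.
library(tools)
checks <- c("undoc",          # objects lacking documentation
            "codoc",          # code/documentation mismatches
            "checkDocFiles")  # problems inside the Rd files themselves
found <- sapply(checks, exists, where = asNamespace("tools"))
```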

I am using Windows XP and R 2.0.1beta .

Thanks.



Re: [Rd] install/build/build --binary

2004-11-06 Thread Gabor Grothendieck
Liaw, Andy andy_liaw at merck.com writes:

: 
:  From: Gabor Grothendieck
:  
:  I have question regarding package installation.
:  
:  What is the difference between check, INSTALL, build and 
:  build --binary, which imply which and what order does one
:  normally perform them?  I have been trying:
:  
:  R CMD build /mypackage
:  R CMD check /mypackage
:  R CMD build --binary /mypackage
:  
:  in that order but wanted to check that this is right. 
: 
: Here's my understanding:
: 
: R CMD build and R CMD check are nearly orthogonal.  R CMD build simply
: creates the .tar.gz using the version info in the DESCRIPTION file.  The only
: part that's not orthogonal between the two (at least on Windows) is that R
: CMD build --binary first installs the package in a temporary directory and
: then zips it up.
: 
: I would do:
: 
: 1. R CMD INSTALL -l testdir mypkg, and start R, load the package and try it
: out a bit.

Thanks.  This was quite useful.  I used the above line with the --no-lazy
switch, and that allowed me to install the package without crashing.  I still
have not figured out why it crashes otherwise, but now that I can run
it I am on my way.

: 
: 2. R CMD check mypkg.  Make corrections as needed.
: 
: 3. R CMD build mypkg (or add the --binary flag) for distribution.
: 
: I believe R-exts has some explanations.



[Rd] creating a package without lazy loading

2004-11-06 Thread Gabor Grothendieck


When I do a   

  R CMD build --binary   

then I get the messages at the end of this post unless I specify 

  LazyLoad: no 

in the DESCRIPTION file.  If I do that then everything
proceeds smoothly.  R CMD check proceeds smoothly in
either case. Is there something I should be aware 
of that is causing this message when I do not turn off
lazy loading?

I am using Windows XP and R 2.0.1 beta dated 2004-11-04.


Error in tools:::.read_description(file) :
file '/DESCRIPTION' does not exist
Execution halted
make: *** [lazyload] Error 1
*** Installation of zoo failed ***

Removing 'C:/PROGRA~1/R/RW2001~1/library/zoo'
Restoring previous 'C:/PROGRA~1/R/RW2001~1/library/zoo'



[Rd] wishlist: \& or \0 in sub/gsub

2004-11-05 Thread Gabor Grothendieck

This is a wish list item.

It would be nice if one could use \0 or \& in sub and
gsub, as in vi:

gsub("an", ":\\&:", "banana")

One can, of course, write the following right now:

gsub("(an)", ":\\1:", "banana")

But often this is within a function and the "an"
is passed, which means that one must use paste to
construct the "(an)" argument.

Thus:

f <- function(x, y) {
    x <- paste("(", x, ")", sep = "")
    gsub(x, ":\\1:", y)
}

simplifies to:

f <- function(x, y) gsub(x, ":\\&:", y)

I have seen at least one post on r-help where this could have been
used to good effect to simplify the answer.

It would be neater if \& or \0 could be used instead.
Obviously this is not a huge issue, but it would be nice.



[Rd] wishlist: better error message in R CMD check

2004-11-05 Thread Gabor Grothendieck
I was running R CMD check on Windows XP 2.0.1beta and 
got this:

Error in parse(file, n, text, prompt) : syntax error on 602

After a lot of aggravation I finally discovered that if I did
this:

copy *.R allofthem.R

and checked line 602 in allofthem.R, I could find the error.
I noticed that there are repeated references in the help archives
to this sort of error and how hard it is to locate.  It
certainly would be nice to tell the user which file the error
is in, referring to the original files rather than some intermediate
file, or, if one must report in terms of intermediate files, to
keep the file around and tell the user which one it is and where it is.
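Until check reports the file itself, a workaround is to parse each source file separately so the parser names the real file; locate_syntax_errors below is a made-up helper, not part of R CMD check:

```r
# Workaround sketch: parse each file under R/ on its own so a syntax
# error is reported against the actual file rather than the
# concatenated intermediate.  locate_syntax_errors is hypothetical.
locate_syntax_errors <- function(dir = "R") {
  for (f in list.files(dir, pattern = "\\.R$", full.names = TRUE))
    tryCatch(parse(f),
             error = function(e) cat(f, ":", conditionMessage(e), "\n"))
  invisible(NULL)
}
```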



Re: [Rd] 2.0.1 countdown

2004-11-04 Thread Gabor Grothendieck
Peter Dalgaard p.dalgaard at biostat.ku.dk writes:

: We plan to do a patch release on Nov. 15 to fix up various problems that
: cropped up after the release of R 2.0.0.  Daily beta snapshots will be
: made available until the day of release.
: 
: Please check if your favourite bugs have been fixed (but notice that

How does one easily find out if one's favorite bugs have been fixed?
Does the CHANGES file list them all or is there a more authoritative 
source?  I was specifically interested in the bug report and posting
I submitted regarding NextMethod.


