Please forgive any mis-post, and do feel free to point me to a more
appropriate list if this isn't properly R-dev.
I have a package on R-forge that shows correct linux and other *nix
builds, but no windows build. The log for the patched version shows the
error below, which appears to be due to a
you what the R-forge log told me.
Steve Ellison
Prof Brian Ripley rip...@stats.ox.ac.uk 28/03/2011 05:32
On Mon, 28 Mar 2011, S Ellison wrote:
Please forgive any mis-post, and do feel free to point me to a more
appropriate list if this isn't properly R-dev.
I have a package on R-forge
This seems trivially fixable using something like
median.data.frame <- function(x, na.rm = FALSE) {
    sapply(x, function(y, na.rm = FALSE) if (is.factor(y)) NA else
           median(y, na.rm = na.rm), na.rm = na.rm)
}
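For illustration, the same column-wise idea can be checked directly with sapply() (the toy data frame below is hypothetical, not from the thread):

```r
# Column-wise medians of a mixed data frame: factor columns give NA,
# numeric columns give their median
df <- data.frame(num = c(1, 3, 5), fac = factor(c("a", "b", "a")))
med <- sapply(df, function(y) if (is.factor(y)) NA else median(y))
```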
Paul Johnson pauljoh...@gmail.com 28/04/2011 06:20
On Wed, Apr 27, 2011 at 12:44 PM, Patrick
Further apologies to the list, but emails are still not getting to folk.
Duncan, you should have had a diff from me yesterday - if not, they've
fouled it up again...
I had the same normalizePath error recently on a new laptop, with a fresh
install of R 2.8.1 and an attempt to install lme4. First attempt:
package 'Matrix' successfully unpacked and MD5 sums checked
Error in normalizePath(path) :
path[1]: The system cannot find the file specified
Second
Some of us (including me) have strongly argued on several
occasions that global options() settings should *not* have an
effect
on anything computing ...
...
Global options are less of a problem where a function allows them to be
overridden by the user or programmer. If something is affected
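A minimal sketch of that pattern (function name and body are hypothetical): the global option supplies only the default, so an explicit argument always wins.

```r
# Take the global option only as a default; the caller can override it
fmt <- function(x, digits = getOption("digits")) round(x, digits)
old <- options(digits = 3)       # global setting
explicit <- fmt(pi, digits = 2)  # explicit argument overrides the option
options(old)                     # restore the previous setting
```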
c() should have been put on the deprecated list a couple
of decades ago
Don't you dare!
Back to reality
phew! had me worried there.
c() is no problem at all for lists, Dates and most simple vector types;
why deprecate something solely because it doesn't behave for something
it doesn't claim
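A couple of quick checks of that claim:

```r
# c() behaves perfectly well for lists and Dates
d <- c(as.Date("2011-03-01"), as.Date("2011-03-02"))
l <- c(list(1, "a"), list(TRUE))
```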
I was working on a permutation-like variant of the bootstrap for smaller
samples, and wanted to be able to get summary stats of my estimator
conveniently. mean() is OK as its a generic, so a mean.oddboot function gets
used automatically. But var, sd and others are not originally written as
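'oddboot' is the class named in the post; the object structure and method body below are hypothetical, just to show the S3 dispatch that makes mean() work automatically:

```r
# mean() is an S3 generic, so mean.oddboot is dispatched automatically;
# var() and sd() are plain functions in base R, so no such method is found
mean.oddboot <- function(x, ...) mean(x$estimates, ...)  # hypothetical body
ob <- structure(list(estimates = c(1, 2, 3)), class = "oddboot")
```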
Brian,
If we make functions generic, we rely on package writers implementing the
documented
semantics (and that is not easy to check). That was deemed to be too
easy to get wrong for var().
Hard to argue with a considered decision, but the alternative facing increasing
numbers of package
Boxplot and bxp seem to have changed behaviour a bit of late (R 2.4.1). Or
maybe I am mis-remembering.
An annoying feature is that while at=3:6 will work, there is no way of
overriding the default xlim of 0.5 to n+0.5. That prevents plotting boxes on,
for example, interval scales - a useful
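For example (dataset chosen purely for illustration), 'at' repositions the boxes, but in the version discussed the x range was still computed as 0.5 to n + 0.5 regardless:

```r
# Boxes can be placed with 'at', but (in the R version discussed) the
# default xlim stayed at c(0.5, n + 0.5) and could not be overridden
b <- boxplot(count ~ spray, data = InsectSprays, plot = FALSE)
bxp(b, at = seq(2, 12, by = 2))  # six boxes at even positions
```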
What is mystifying is that the issue was not present in previous versions, so appropriate code already existed.
However, I agree that there seem to be a couple of additional issues that I had missed.
I am perfectly happy to look at this again myself, though, and provide extended code;
Brian,
Note that ?bxp quite carefully says which graphical pars it does and does
not accept, and 'xlim' is one it does not accept.
In my version at the time, bxp did not list which plot parameters it does not
accept. xlim was simply not mentioned at all. I can't easily see lack of a
mention
CO2 is apparently a groupedData object; the formula attribute is described by
Pinheiro and Bates as a 'display formula'.
Perhaps reference to the nlme package's groupedData help would be informative?
Gabor Grothendieck [EMAIL PROTECTED] 16/07/2007 16:18:37
Yes. That's what I was referring
Is it possible to keep from triggering the following warning
when I check the package?
summary:
function(object, ...)
summary.agriculture:
function(x, analyte.names, results.col, analyte.col, by, det.col,
[clip]
Part of the solution is to add ... to the legacy function; that is
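A sketch of that fix (class and method body are hypothetical): the method's formals begin with the generic's ('object'), and '...' absorbs the rest, which silences the signature-mismatch warning:

```r
# Method signature now begins like the generic: function(object, ...)
summary.agriculture <- function(object, ...) {
  NextMethod()  # fall through to summary.data.frame in this sketch
}
ag <- structure(data.frame(yield = c(1, 2, 3)),
                class = c("agriculture", "data.frame"))
s <- summary(ag)
```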
of the world's best
statisticians and a system entirely under their control does not guarantee an
unambiguous count.
Anyone out there still think statistics are easy?
Even so, 4000 plus or minus a few says a great deal for the R project's impact
S Ellison
*with limited accuracy but some entertainment
-project.org/web/packages/policies.html for FTP upload?
S Ellison
***
This email and any attachments are confidential. Any use...{{dropped:8}}
R-devel@r-project.org mailing list
a
clash with an existing package name but a clash with an existing version
number. Probable explanation: I have failed to update my DESCRIPTION file.
Apologies for wasting folks' time. Again.
S Ellison
Rather than transport quantities of the Introduction to R (a perfectly
sensible title for a very good starting point, IMHO) would it not be
simpler and involve less maintenance to include a link or
cross-reference in the 'formula' help page to the relevant part of the
Introduction? If nothing
Plaintive squeak: Why the change?
Some OS's and desktops use the extension, so forgetting it causes
trouble. The new default filename keeps a filetype (as before) but the
user now has to type a filetype twice (once as the type, once as
extension) to get the same effect for their own filenames.
Michael Prager [EMAIL PROTECTED] 06/04/08 4:28 AM
There is much to be said for consistency (across platforms and
functions) and stability (across versions) in software.
I could not agree more. But while consistency is an excellent reason
for making the patch consistent across platforms, it
default were suggestions for possible future tidying-up
in subsequent releases.
Steve E
S Ellison [EMAIL PROTECTED] 06/04/08 12:44 PM
Michael Prager [EMAIL PROTECTED] 06/04/08 4:28 AM
There is much to be said for consistency (across platforms and
functions) and stability (across versions
?text says
'adj' allows _adj_ustment of the text with respect to '(x,y)'.
Values of 0, 0.5, and 1 specify left/bottom, middle and
right/top,
respectively.
But it looks like 0, 1 specify top, bottom respectively in the y
direction.
plot(1:4)
text(2, 2, adj = c(0, 0))
Yup; you're all right - it IS consistent (and I'd even checked the x-adj
and it did what I expected!!). It's just that ?text is talking about the
position of the 'anchor' point in the text region rather than the
subsequent location of the centre of the text.
Anyway; if anyone is considering a
of prior weights the scaled residuals are expected to
be IID Normal (under the normality assumption for a linear model) and without
scaling they aren't IID, so a Q-Q plot would be meaningless without scaling.
S Ellison
Transferred from R-help:
From: S Ellison
Subsetting using subset() is perhaps the most natural way of
subsetting data frames; perhaps a line or two and an example could
usefully be included in the 'Working with data frames' section of the R
Intro?
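Something along these lines (toy data):

```r
# subset() keeps rows matching a condition and, via 'select', chosen columns
df <- data.frame(x = 1:5, y = c("a", "b", "a", "b", "a"))
sub <- subset(df, y == "a", select = x)
```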
From: Bert Gunter [mailto:gunter.ber
and that
removes the above Note*, but I then get a Note warning of maintainer change.
Is either Note going to get in the way of CRAN submission? (And if one of them
will, which one?)
S Ellison
*A minor aside: I couldn't find any documented reason for that, or indeed any
restriction on the format
or dependency lists is a good idea that individual package
maintainers could relatively easily manage, but I think freezing CRAN as a
whole or adopting single release cycles for CRAN would be thoroughly
impractical.
S Ellison
This seems to be a common approach in other packages. However, one of my
testers noted that if he put formula=y~. then w, ID, and site showed up in the
model where they weren't supposed to be.
This is the documented behaviour for '.' in a formula - it means 'everything
else in the data
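The behaviour is easy to demonstrate (variable names here are illustrative):

```r
# '.' expands to every other column of 'data'; exclude terms with '-'
df <- data.frame(y = rnorm(10), a = rnorm(10), b = rnorm(10), ID = 1:10)
fit_all  <- lm(y ~ ., data = df)       # a, b and ID all enter the model
fit_noID <- lm(y ~ . - ID, data = df)  # '.' minus the unwanted term
```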
to at least look at and either
to fix or to at least warn of in documentation.
S Ellison
'.
Whether that is inconsistent is something of a matter of perspective.
Simplification applied as far as possible will always depend on what
simplification is possible for the particular return values, so different
return values provide different behaviour.
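The point can be seen directly with sapply():

```r
# Simplification depends on what the individual results allow
m <- sapply(1:3, function(i) c(i, i^2))   # equal-length vectors -> matrix
v <- sapply(1:3, function(i) i^2)         # scalars -> vector
l <- sapply(1:3, function(i) seq_len(i))  # ragged lengths -> list
```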
S Ellison
so.
So ask before removing someone from your citation. If they say 'no', don’t
remove them.
S Ellison
> > Is there a point at which the original developer should not stay on the
> > author
> list?
>
> Authorship is not just about code. For example, there are functions in R
> which
> have been completely recoded, but the design and documentation remain.
> Copyright can apply to designs and
particularly one that starts small but is
subsequently refactored in growth - there may be no code left that was
contributed by the original developer.
Is there a point at which the original developer should not stay on the author
list?
' section of the help page says
" A numeric or complex array of suitable size, or a vector if the
result is one-dimensional. For the first four functions the
'dimnames' (or 'names' for a vector result) are taken from the
original array
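For instance, with colSums() (one of the four functions that help page covers):

```r
# The result's names come from the original array's dimnames
m <- matrix(1:6, nrow = 2,
            dimnames = list(c("r1", "r2"), c("a", "b", "c")))
cs <- colSums(m)
```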
er a single logical (that could
be used with 'if') nor a vector that can be readily replicated to the desired
length with 'rep'?
If not, I'd drop the attempt to generate new ifelse-like functions.
S Ellison
But that's a personal opinion. If these really are serious issues, somebody
needs to work up a consistent policy for R projects; otherwise we'll all be
walking on eggshells.
S Ellison
t; either the generic or the class.
>
> I agree... (and typically it does "own" the class)
If that is true and a good general guide, is it worth adding something to that
effect to 1.5.2 of "Writing R extensions"?
At present, nothing in 1.5.2 requires or recommends
intain in the long term.
So if you want to do something that will readily convert all combinations of
things like '12 w', '12W', '12wks', '3m 2d', 1wk 2d', '18d' etc, write that as
a stand-alone routine that converts those 'simple' formats directly to difftime
objects and call i
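A hypothetical sketch of such a stand-alone converter (the function name, unit table and the 30-day month are all assumptions, not from the thread):

```r
# Convert '12 w', '12W', '12wks', '3m 2d', '1wk 2d', '18d' etc. to difftime
to_difftime <- function(s) {
  unit_days <- c(w = 7, m = 30, d = 1)  # assumed: a month counts as 30 days
  toks <- regmatches(s, gregexpr("[0-9]+\\s*[wmd]", tolower(s)))[[1]]
  num  <- as.numeric(sub("^([0-9]+).*$", "\\1", toks))
  unit <- sub("^[0-9]+\\s*", "", toks)
  as.difftime(sum(num * unit_days[unit]), units = "days")
}
```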
ed library version of a
function rather than (say) a 'source'd user-space version that is under
development.
Being non-specific (ie omitting foo:::) means that test code would pick up the
development version in the current user environment by default. That's handy
for small
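A small demonstration of the difference:

```r
# An unqualified call resolves through the search path, so a source()'d
# development copy shadows the installed one; pkg::fun pins the package copy
median <- function(x, ...) "development version"  # stands in for sourced code
shadowed  <- median(1:3)         # picks up the workspace copy
qualified <- stats::median(1:3)  # always the package copy
rm(median)
```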
> From: Thomas Yee [mailto:t@auckland.ac.nz]
>
> Thanks for the discussion. I do feel quite strongly that
> the variables should always be a part of a data frame.
This seems pretty much a decision for R core, and I think it's useful to have
raised the issue.
But I, er, feel strongly
FWIW, before all the examples are changed to data frame variants, I think
there's fairly good reason to have at least _one_ example that does _not_ place
variables in a data frame.
The data argument in lm() is optional. And there is more than one way to manage
data in a project. I personally
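Both styles fit the same model:

```r
# Variables can live in a data frame or in the calling environment
set.seed(1)
x <- 1:10
y <- 2 * x + rnorm(10)
fit_env <- lm(y ~ x)                           # from the workspace
fit_df  <- lm(y ~ x, data = data.frame(x, y))  # from a data frame
```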
> > plot(x=1:10, y=)
> > plot(x=1:10, y=, 10:1)
> >
> > In both cases, 'y=' is ignored. In the first, the plot is for y=NULL (so not
> 'missing' y)
> > In the second case, 10:1 is positionally matched to y despite the
> > intervening
> 'missing' 'y='
> >
> > So it isn't just 'missing'; it's
> When trying out some variations with `[.data.frame` I noticed some (to me)
> odd behaviour,
Not just in 'myfun' ...
plot(x=1:10, y=)
plot(x=1:10, y=, 10:1)
In both cases, 'y=' is ignored. In the first, the plot is for y=NULL (so not
'missing' y)
In the second case, 10:1 is positionally
> Yes, I think all of that is correct. But y _is_ missing in this sense:
> > plot(1:10, y=)
> > ...
> Browse[2]> missing(y)
Although I said what I meant by 'missing' vs 'not present', it wasn't exactly
what missing() means. My bad.
missing() returns TRUE if an argument is not specified in the
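In a small function this looks like (f is hypothetical):

```r
# missing() is TRUE when no value was supplied; an empty 'y=' in the
# call is dropped by the argument matcher, so y is still missing
f <- function(x, y) missing(y)
no_y    <- f(1)
empty_y <- f(1, y = )  # 'y=' with no value: y is still missing
given_y <- f(1, 2)
```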
> As far as I can tell, the manual help page for ``sd``
>
> ?sd
>
> does not explicitly mention that the formula for the standard deviation is
> the so-called "Bessel-corrected" formula (divide by n-1 rather than n).
See Details, where it says
"Details:
Like 'var' this uses denominator n - 1."
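A quick numeric check:

```r
# sd() uses the Bessel-corrected denominator n - 1
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
n <- length(x)
bessel <- sqrt(sum((x - mean(x))^2) / (n - 1))
```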
to be fixed for _some_, or
doesn't need to be changed (the present version on cran is running without
apparent issues, but was submitted before the checks expanded to pick this up).
S Ellison
> I have googled 'piecewise' and it seems to be
> widely used with that spelling.
... because that is the only correct spelling of 'piecewise' in English...
But I see Duncan Murdoch has explained why it was flagged; not all English
words appear in all English dictionaries.
In fact, not all
48 matches