I am the author of the R package animint, which uses testthat for unit tests.
This means that there is a single test file (animint/tests/testthat.R) and
during R CMD check we will see the following output
* checking tests ...
Running ‘testthat.R’
I run these tests on Travis, which has a policy that
Dear R-devel,
I am running mclapply with many iterations over a function that modifies
nothing and makes no copies of anything. It is taking up a lot of memory,
so it seems to me like this is a bug. Should I post this to
bugs.r-project.org?
A minimal reproducible example can be obtained by first
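Since the original example is truncated here, the following is a guessed minimal sketch of the kind of setup described (the vector size, iteration count, and core count are my assumptions):

```r
library(parallel)
# A large object shared by all forked workers; the function below
# modifies nothing and makes no explicit copies.
x <- rnorm(1e6)
result <- mclapply(seq_len(100), function(i) sum(x), mc.cores = 2)
# Memory can still grow: when the garbage collector runs in a child
# process, it touches (marks) shared pages, forcing copy-on-write
# duplication even though the R code copies nothing.
str(result[[1]])
```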
wrote:
> Toby,
>
> > On Sep 2, 2015, at 1:12 PM, Toby Hocking <tdho...@gmail.com> wrote:
> >
> > Dear R-devel,
> >
> > I am running mclapply with many iterations over a function that modifies
> > nothing and makes no copies of anything. It is
the garbage collector touches objects, as pointed out by Radford Neal
> here:
> http://r.789695.n4.nabble.com/Re-R-devel-Digest-Vol-149-Issue-22-td4710367.html
>
> If so, I don't think this would be easily avoidable, but there may be
> mitigation strategies.
>
> ~G
>
> On
I am getting the following on CRAN windows and winbuilder
https://www.r-project.org/nosvn/R.check/r-devel-windows-ix86+x86_64/penaltyLearning-00check.html
Apparently there is an error in re-building vignettes, but I do not have
any idea what it is, because all that is listed is three dots (...).
Hi all, (and especially hi to Tomas Kalibera who accepted my patch sent
yesterday)
I believe that I have found another bug, this time in the substring
function. The use case that I am concerned with is when there is a single
(character scalar) text/subject, and many substrings to extract. For
-substring-bug.R
To me this is a clear indication of a bug in substring, but again it would
be nice to have some feedback/confirmation before posting on bugzilla.
Also this suggests a fix -- just need to copy whatever stringi::stri_sub is
doing.
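A sketch of the use case described, one character scalar subject and many substrings to extract; the sizes are my assumptions, and the stringi line is commented out since stringi is not a base package:

```r
# One subject string, many (start, end) pairs.
subject <- paste(rep("a", 1e5), collapse = "")
starts <- seq(1, 1e5 - 10)
ends <- starts + 9
# Base R: vectorized over first/last, but reportedly slow here.
base.seconds <- system.time(res <- substring(subject, starts, ends))
# The stringi equivalent mentioned above:
# stringi::stri_sub(subject, starts, ends)
```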
On Wed, Feb 20, 2019 at 11:16 AM Toby Hocking wrote
Hi all,
Several people have noticed that gregexpr is very slow for large subject
strings when perl=TRUE is specified.
- https://stackoverflow.com/questions/31216299/r-faster-gregexpr-for-very-large-strings
-
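A sketch of the timing comparison, with an assumed subject size:

```r
# Large subject with many matches: "b" occurs 10000 times.
subject <- paste(rep("ab", 1e4), collapse = "")
perl.seconds <- system.time(m.perl <- gregexpr("b", subject, perl = TRUE))
tre.seconds  <- system.time(m.tre  <- gregexpr("b", subject, perl = FALSE))
# Both engines should report the same match positions;
# only the elapsed times differ.
```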
Jan Gorecki wrote:
> Hi Toby,
> AFAIK it has not been addressed in R. You can handle the problem on
> your package side, see
> https://github.com/Rdatatable/data.table/pull/3237
> Regards,
> Jan
>
>
> On Thu, May 30, 2019 at 4:46 AM Toby Hocking wrote:
> >
>
the
>
> error: "make_RAW_from_NA_LLINT" not available for .Call() for package
> "S4Vectors"
>
> later on when trying to load the package.
>
> Cheers,
> H.
>
>
> On 5/30/19 16:31, Toby Hocking wrote:
> > thanks for the tip Jan.
> >
>
If anybody else has this issue, please add a comment on
https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17478 so we are more
likely to get R-core to address this.
Thanks
Toby
On Tue, Jun 4, 2019 at 2:58 PM Pages, Herve wrote:
> On 5/31/19 08:41, Toby Hocking wrote:...
> > In m
Hi all,
I am having an issue related to installing packages on windows with
R-3.6.0. When installing a package that is in use, I expected R to stop
with an error. However I am getting a warning that the DLL copy was not
successful, but the overall package installation IS successful. This is
quite
Hi all,
Today I had an R CMD build that failed while building a vignette because
the vignette needs tidyr (>= 1.0, declared in DESCRIPTION Suggests) but my
system had a previous version installed.
It did not take me too long to figure out the issue (solved by upgrading
tidyr) but it would have
Hi R-core,
I have been performance testing R packages for wide-to-tall data reshaping
and for the most part I see they differ by constant factors.
However in one test, which involves converting into multiple output
columns, I see that stats::reshape is in fact quadratic in the number of
input
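A sketch of the kind of benchmark described, converting wide data into multiple output columns (the sizes and variable names are my assumptions):

```r
# Wide data: two measured variables (a, b) spread over many columns,
# named a_1..a_100, b_1..b_100.
n.times <- 100
wide <- as.data.frame(matrix(1, nrow = 10, ncol = 2 * n.times))
names(wide) <- paste0(rep(c("a", "b"), each = n.times), "_", 1:n.times)
seconds <- system.time(
  long <- stats::reshape(wide, direction = "long",
                         varying = names(wide), sep = "_")
)
# Under quadratic scaling, doubling n.times should
# roughly quadruple the elapsed time.
```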
hi there, thanks for the feedback, sorry about the cross-posting, and that
makes sense given the nojss option, which I was not aware of.
On Wed, Jan 8, 2020 at 9:16 AM Achim Zeileis
wrote:
> On Wed, 8 Jan 2020, Iñaki Ucar wrote:
>
> > On Wed, 8 Jan 2020 at 19:21, Toby Hocking
Hi R-core, I was wondering if somebody could please add jsslogo.jpg to the
R sources? (as I reported yesterday in this bug)
https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17687
R already includes jss.cls which is the document class file for Journal of
Statistical Software. Actually, for the
Can someone please add documentation for that environment variable to
Writing R Extensions? An appropriate place would be section
https://cloud.r-project.org/doc/manuals/r-release/R-exts.html#Suggested-packages
which already discusses _R_CHECK_DEPENDS_ONLY_=true
See at https://cran.r-project.org/doc/manuals/r-devel/R-ints.html#Tools
>>
>> Gabor
>>
>> On Wed, May 13, 2020 at 7:05 PM Toby Hocking wrote:
>> >
>> > Can someone please add documentation for that environment variable to
>> > Writing R Extensions? An
Hi the reference to R Internals
https://cran.r-project.org/doc/manuals/r-release/R-ints.html#Tools
in ?check (PkgUtils.Rd in utils package) is stale. Here is my proposed
patch (use named reference rather than numeric reference to avoid any
similar broken links in the future).
Index:
Hi win-builder certificate expired on Aug 15. My student on the other side
of the world is also seeing this problem so I think it needs to be fixed...
> download.file("https://win-builder.r-project.org", "/tmp/wb.html")
trying URL 'https://win-builder.r-project.org'
Error in
WRE explains how to specify C++11, C++14, etc. standards, but I don't know about C
https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Using-C_002b_002b11-code
BTW I believe this question would be more appropriate for R-package-devel.
On Mon, Sep 28, 2020 at 4:44 AM Andreas Kersting
wrote:
> Hi,
>
>
Hi Luke,
I just wanted to say thanks for taking the time to add this tag. That is
very helpful to know which bugs are worth working on and need help. Keep up
the good work!
Toby
On Wed, Aug 5, 2020 at 7:23 AM wrote:
> Just a quick note to mention that we have added a HELPWANTED keyword
> on
Hi all,
I'm getting the following error from substring:
> substr("Jens Oehlschl\xe4gel-Akiyoshi", 1, 100)
Error in substr("Jens Oehlschl\xe4gel-Akiyoshi", 1, 100) :
invalid multibyte string at 'gel-A<6b>iyoshi'
Is that normal / intended? I've tried setting the Encoding/locale to
Latin-1/UTF-8
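For reference, a sketch of the Encoding-based workaround, assuming the bytes really are Latin-1:

```r
x <- "Jens Oehlschl\xe4gel-Akiyoshi"
# In a UTF-8 locale the \xe4 byte is not valid UTF-8, hence the error.
# Declaring the actual encoding lets substr count characters correctly:
Encoding(x) <- "latin1"
substr(x, 1, 100)
```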
wrote:
> On Fri, 26 Jun 2020 15:57:06 -0700
> Toby Hocking wrote:
>
> >invalid multibyte string at 'gel-A<6b>iyoshi'
>
> >https://stat.ethz.ch/pipermail/r-devel/1999-November/author.html
>
> The server says that the text is UTF-8:
>
> curl -sI \
>
mtests/README.txt . Set
> suppressions in ~/.valgrindrc, e.g. the CRAN check machine has
>
> --suppressions=/data/blackswan/ripley/wcsrtombs.supp
>
> It is an issue in your OS (glibc), not TRE nor R.
>
> On 10/06/2020 00:21, Toby Hocking wrote:
> > Hi all,
> >
> >
Hi all,
I'm on Ubuntu 18.04, running R-4.0.0 which I compiled from source, and
using valgrind I am always seeing the following message. Does anybody
else see that? Is that a known false positive? Any ideas how to
fix/suppress? Seems related to TRE, do I need to upgrade that?
(base)
By the way, where is the documentation for INTEGER_ELT, REAL_ELT, etc? I
looked in Writing R Extensions and R Internals but I did not see any
mention.
REAL_ELT is briefly mentioned on
https://svn.r-project.org/R/branches/ALTREP/ALTREP.html
Would it be possible to please add some mention of them to
_t i, SEXP v);
>
> So the indexing is with R_xlen_t and they return the value itself as one
> would expect.
>
> Cheers,
> Simon
>
>
> > On Jun 17, 2021, at 2:22 AM, Toby Hocking wrote:
> >
> > By the way, where is the documentation for INTEGER_ELT, REAL_ELT,
Hi all, Today I noticed bug(s?) in R-4.0.5, which seem to be fixed in
R-devel already. I checked on
https://developer.r-project.org/blosxom.cgi/R-devel/NEWS and there is no
mention of these changes, so I'm wondering if they are intentional? If so,
could someone please add a mention of the bugfix
d prefer some stricter checks of strings validity and
> perhaps disallowing the "C" encoding in R, so yet another behavior where
> it would be clearer that this cannot really work, but that would require
> more thought and effort.
>
> Best
> Tomas
>
>
> On 4/27/21 9
at 9:04 AM Tomas Kalibera
wrote:
>
> On 4/28/21 5:22 PM, Martin Maechler wrote:
> >>>>>> Toby Hocking
> >>>>>> on Wed, 28 Apr 2021 07:21:05 -0700 writes:
> > > Hi Tomas, thanks for the thoughtful reply. That makes sense about
>
> > Following Toby's argument, it's clear to me: the first and the last.
> >
> > Iñaki
> >
> > > (in the sense of is.atomic returning \code{TRUE})" in front of
> "vectors"
> > > or similar where what types of objects are supported seems justified,
> equivalent to the conceptual task na.omit is doing, in my opinion, as
> illustrated by what the data.frame method does.
>
> Thus what I was getting at above about it not being clear that lst[!is.na(lst)]
> is the correct thing for na.omit to do
>
> ~G
>
>
imilar where
> what types of objects are supported seems justified, though, imho, as the
> current documentation is either ambiguous or technically incorrect,
> depending on what we take "vector" to mean.
>
> Best,
> ~G
>
> On Wed, Aug 11, 2021 at 10:16 PM Toby Hocking wr
= chr "AsIs"
> is.na(f)
         L
[1,] FALSE
[2,]  TRUE
[3,] FALSE
> na.omit(f)
     L
1
2   NA
3    0
On Wed, Aug 11, 2021 at 9:58 PM Toby Hocking wrote:
> na.omit is documented as "na.omit returns the object with incomplete cases
> removed." and "At present these will hand
na.omit is documented as "na.omit returns the object with incomplete cases
removed." and "At present these will handle vectors," so I expected that
when it is used on a list, it should return the same thing as if we subset
via is.na; however I observed the following,
> L <- list(NULL, NA, 0)
>
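The comparison described can be sketched as follows (no outputs asserted for na.omit, since the observed mismatch is the point of the post):

```r
L <- list(NULL, NA, 0)
is.na(L)      # elementwise test: only the NA element is TRUE
L[!is.na(L)]  # the subset the documentation's wording suggests
na.omit(L)    # compare with what na.omit actually returns
```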
BTW this is documented here
http://pcre.org/current/doc/html/pcre2api.html#infoaboutpattern with a
helpful example, copied below.
As a simple example of the name/number table, consider the following
pattern after compilation by the 8-bit library (assume PCRE2_EXTENDED
is set, so white space -
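R's own perl=TRUE regex functions expose the analogous name/number table through match attributes; a sketch with a hypothetical pattern and subject:

```r
subject <- "Toby Hocking"
m <- regexpr("(?<first>\\w+) (?<last>\\w+)", subject, perl = TRUE)
attr(m, "capture.names")   # the name table: "first", "last"
# Extract the named groups using their start positions and lengths:
first <- attr(m, "capture.start")
substring(subject, first, first + attr(m, "capture.length") - 1)
```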
Hi Michael, it sounds like you don't want to use a CRAN package for
this, but you may try re2, see below.
> grepl("(invalid","subject",perl=TRUE)
Error in grepl("(invalid", "subject", perl = TRUE) :
invalid regular expression '(invalid'
In addition: Warning message:
In grepl("(invalid",
Another option is to use https://emacspeak.sourceforge.net/ (version of
emacs editor/ide which can speak letters/words/lines -- has a blind
maintainer) with https://ess.r-project.org/ (interface for editing and
running R code from within emacs)
On Thu, Sep 22, 2022 at 9:42 AM Duncan Murdoch
Hi Aidan, I think you are on the right email list.
I'm not R-core, but this looks like an interesting/meaningful/significant
contribution to base R.
I'm not sure what the original dendrapply looks like in terms of code style
(variable names/white space formatting/etc) but in my experience it is
Dear R-devel,
A number of people have observed anecdotally that read.csv is slow for
large number of columns, for example:
https://stackoverflow.com/questions/7327851/read-csv-is-extremely-slow-in-reading-csv-files-with-large-numbers-of-columns
I did a systematic comparison of read.csv with
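A sketch of the kind of benchmark setup described (row and column counts are assumptions):

```r
# Write a CSV with many columns, then time reading it back.
n.col <- 1000
tf <- tempfile(fileext = ".csv")
write.csv(matrix(1, nrow = 10, ncol = n.col), tf, row.names = FALSE)
seconds <- system.time(df <- read.csv(tf))
# Doubling n.col should roughly double the time if linear,
# and roughly quadruple it if quadratic.
```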
Dear R-devel,
I did a systematic comparison of write.csv with similar functions, and
observed two asymptotic inefficiencies that could be improved.
1. write.csv is quadratic time (N^2) in the number of columns N.
Can write.csv be improved to use a linear time algorithm, so it can handle
CSV files
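A sketch of a timing over increasing column counts (sizes assumed):

```r
# Time write.csv at several column counts to see how it scales.
elapsed <- sapply(c(500, 1000, 2000), function(n.col) {
  df <- as.data.frame(matrix(1, nrow = 10, ncol = n.col))
  tf <- tempfile(fileext = ".csv")
  system.time(write.csv(df, tf, row.names = FALSE))[["elapsed"]]
})
# Under quadratic scaling, each doubling of columns
# should roughly quadruple the elapsed time.
```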
wing a very
> nice substring approach that I've seen implemented by Toby Hocking
> in the nc package - nc::capture_first_vec).
>
> strcapture2 <- function(pattern, x, proto, perl = FALSE, useBytes = FALSE) {
> if (isTRUE(perl)) {
> m <- regexpr(pattern
Hi Hilmar and Ivan,
I have used your code examples to write a blog post about this topic,
which has figures that show the asymptotic time complexity of the
various approaches,
https://tdhock.github.io/blog/2023/df-partial-match/
The asymptotic complexity of partial matching appears to be quadratic
My opinion is that the proposed feature would be greatly appreciated by users.
I had always wondered if I was the only one doing paste(readLines(f),
collapse="\n") all the time.
It would be great to have the proposed, more straightforward way to
read the whole file as a string:
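For comparison, a sketch of the current idiom next to a single-call read via readChar (which, unlike the paste idiom, keeps the trailing newline):

```r
f <- tempfile()
writeLines(c("line one", "line two"), f)
# Current idiom: read lines, then re-join them.
s1 <- paste(readLines(f), collapse = "\n")
# Single read of the whole file as one string:
s2 <- readChar(f, file.size(f), useBytes = TRUE)
# s2 equals paste0(s1, "\n") because writeLines terminates the last line.
```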
Hi Dirk thanks for the helpful response.
On Wed, Oct 24, 2018 at 5:09 AM Dirk Eddelbuettel wrote:
>
> On 23 October 2018 at 14:02, Toby Hocking wrote:
> | I would like to put the https://github.com/tdhock/PeakSegDisk package on
> | CRAN. This package needs Berkeley DB C++ Stand
Hi all,
is there a known fix for this WARNING which I am getting on solaris for my
newly submitted nc package?
https://www.r-project.org/nosvn/R.check/r-patched-solaris-x86/nc-00check.html
A quick google search for "it is not known that wchar_t is Unicode on this
platform cran" shows that many
tdata", "file_with_unicode.txt",
package="yourPkg"))
On Wed, Sep 25, 2019 at 11:45 AM Tomas Kalibera
wrote:
> On 9/25/19 7:59 PM, Toby Hocking wrote:
>
> Hi Tomas thanks for the explanation. Does that mean that there is no known
> fix? i.e. it is OK to re-submit a new version of
Hi Tomas thanks for the explanation. Does that mean that there is no known
fix? i.e. it is OK to re-submit a new version of my package without fixing
these WARNINGS?
On Tue, Sep 24, 2019 at 1:38 AM Tomas Kalibera
wrote:
> On 9/24/19 1:57 AM, Toby Hocking wrote:
> &g
Hi Naras
I had a similar issue recently with
https://cloud.r-project.org/web/packages/nc/ --- it Suggests: re2r which is
a package that is no longer on CRAN, but available on github. To solve the
issue I just copied the re2r into a drat repo, which is a CRAN-like
repository hosted on github. More
Hi Tomas, these are really helpful suggestions, which are not at all
discussed in the current Writing R Extensions, nor in CRAN Repository
Policy. Would you please consider adding this information to one of these
official sources of documentation?
On Fri, May 14, 2021 at 3:59 AM Tomas Kalibera
Thanks Dirk. I agree.
data.table is not in a situation to update very soon, so the easiest
solution for the R community would be for CRAN to set OMP_THREAD_LIMIT
to 2 on the Windows and Debian machines doing this test.
Otherwise the 1400+ packages with hard dependencies on data.table will
each