Paul, this is being worked on. As you can imagine, testing over 17,000 packages on an M1 Mac mini isn't quite trivial. The first priority was to get the nightly R builds to work. Second was to get CRAN package builds to work. Third is to provide checks. The first two were finished last week, and the checks have been running for the past two days. Unfortunately, some pieces (like XQuartz) are still not quite stable, so it takes more manual intervention than one would expect. We are at close to 16k packages checked, so we're getting there.
As for EvalEst, the check has finished, so I have:

  Running ‘dse2tstgd2.R’ [13s/14s]
  Running the tests in ‘tests/dse2tstgd2.R’ failed.
  Last 13 lines of output:
    > ok <- fuzz.large > error
    > if (!ok) {if (is.na(max.error)) max.error <- error
    +           else max.error <- max(error, max.error)}
    > all.ok <- all.ok & ok
    > {if (ok) cat("ok\n") else cat("failed! error= ", error,"\n") }
    ok
    >
    > cat("All Brief User Guide example tests part 2 completed")
    All Brief User Guide example tests part 2 completed> if (all.ok) cat(" OK\n") else
    +     cat(", some FAILED! max.error = ", max.error,"\n")
    , some FAILED! max.error =  1.065814e-14
    >
    > if (!all.ok) stop("Some tests FAILED")
    Error: Some tests FAILED
    Execution halted

When I run it by hand I get "ok" for all but:

  Guide part 2 test 10... failed! error=  1.065814e-14

  > sum(fc1$forecastCov[[1]])
  [1] 14.933660144821400806
  > sum(fc2$forecastCov[[1]])
  [1] 14.933660144821400806
  > sum(fc2$forecastCov.zero)
  [1] 31.654672476928304548
  > sum(fc2$forecastCov.trend)
  [1] 18.324461923341957004
  > c(14.933660144821400806 - sum(fc1$forecastCov[[1]]),
  +   14.933660144821400806 - sum(fc2$forecastCov[[1]]),
  +   31.654672476928297442 - sum(fc2$forecastCov.zero),
  +   18.324461923341953451 - sum(fc2$forecastCov.trend) )
  [1]  0.0000000000000000000e+00  0.0000000000000000000e+00 -1.0658141036401502788e-14
  [4] -3.5527136788005009294e-15

I hope this helps you to track it down.

Cheers,
Simon

> On Mar 1, 2021, at 4:50 AM, Paul Gilbert <pgilbert...@gmail.com> wrote:
>
> If there was a response to the "how can I test it out" part of this question, then I missed it. Can anyone point to a Win-builder-like site for testing on M1mac, or to the M1mac results from testing packages already on CRAN? They still do not seem to be on the CRAN daily site. Even a link to the 'Additional issues' on M1 Mac on the results pages would be helpful, because it does not seem to be in an obvious place. I am trying to respond to a demand to relax or remove some package testing that fails because M1mac gives results outside my specified tolerances.
>
> The tests in question (in package EvalEst) have been used since very early R versions (0.16, circa 1995), and were used on S-PLUS prior to that. There has been a need to adjust tolerances occasionally, but they have been stable for a long time (more than 20 years, I believe). Since these tests date from a time when simple double precision was the norm, the tolerances are already fairly relaxed, so I hesitate to adjust them without actually examining the results.
>
> Paul Gilbert
>
> On 2021-02-22 3:30 a.m., Travers Ching wrote:
>> I noticed CRAN is now doing checks against Apple M1, and some packages are failing, including a dependency I use.
>> Is building on M1 now a requirement, or can the check be ignored? If it's a requirement, how can one test it out?
>> Travers

______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
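Aside: a minimal sketch (in R, not the actual EvalEst test code) of the fuzz-style comparison seen in the log above. The value of fuzz.large here is an assumption, chosen only to reproduce the reported failure; the two sums are taken directly from the log:

  # Hypothetical reconstruction of the failing comparison; fuzz.large is assumed.
  fuzz.large <- 1e-14                  # assumed tolerance threshold (illustrative)
  target  <- 31.654672476928297442     # reference value hard-coded in the test, per the log
  current <- 31.654672476928304548     # sum(fc2$forecastCov.zero) on the M1 build
  error   <- abs(target - current)     # ~1.065814e-14 after double rounding
  ok <- fuzz.large > error             # FALSE: the error just exceeds the fuzz
  if (ok) cat("ok\n") else cat("failed! error= ", error, "\n")

For values of this magnitude (~31.65), one ulp is about 3.6e-15, so the reported discrepancy is roughly 3 ulps; that is plausibly just a different ordering of floating-point operations on arm64 rather than a real numerical problem, which is consistent with the question of whether the tolerances should be relaxed.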