I'm wondering if we should change the order of publishing next time.
Although the release has not been announced yet, we have already uploaded
the artifacts for (1), (2), and (3) below.

1. Download: https://www-us.apache.org/dist/spark/spark-2.4.0/
2. Maven:
https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.12/2.4.0
3. PySpark: https://pypi.org/project/pyspark/2.4.0/

Bests,
Dongjoon.


On Mon, Nov 5, 2018 at 7:22 PM Sean Owen <sro...@gmail.com> wrote:

> What can we do to get the release through? Is there any way to
> circumvent these tests or otherwise hack around them, or does it need a
> maintenance release?
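> For example, something along these lines at the top of tests/run-all.R
> might let the CRAN run skip cleanly instead of failing (a sketch only; it
> assumes SparkR's internal checkJavaVersion() errors on an unsupported JVM,
> which is what produces the message below):
>
>     # Sketch: run the suite only when a supported Java runtime is found,
>     # so the CRAN machines on Java 11 skip instead of erroring out.
>     javaOk <- tryCatch({
>       SparkR:::checkJavaVersion()
>       TRUE
>     }, error = function(e) FALSE)
>     if (javaOk) {
>       library(testthat)
>       library(SparkR)
>       testthat::test_package("SparkR")
>     } else {
>       message("Skipping SparkR tests: no supported Java runtime found")
>     }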
> On Mon, Nov 5, 2018 at 8:53 PM Felix Cheung <felixcheun...@hotmail.com> wrote:
> >
> > FYI. The SparkR submission failed. It seems Java 11 is detected correctly
> > when building the vignettes, but the tests are not being skipped as
> > expected.
> >
> > Error: processing vignette ‘sparkr-vignettes.Rmd’ failed with diagnostics:
> > Java version 8 is required for this package; found version: 11.0.1
> > Execution halted
> >
> > * checking PDF version of manual ... OK
> > * DONE
> > Status: 1 WARNING, 1 NOTE
> >
> > Current CRAN status: ERROR: 1, OK: 1
> > See: <https://CRAN.R-project.org/web/checks/check_results_SparkR.html>
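> >
> > (On the vignette side, one way out might be to gate the Rmd chunks on the
> > same Java check, so the CRAN build skips evaluation instead of erroring; a
> > sketch only, assuming SparkR's internal checkJavaVersion() and standard
> > knitr chunk options:)
> >
> >     # Possible guard in the sparkr-vignettes.Rmd setup chunk (sketch):
> >     javaOk <- tryCatch({
> >       SparkR:::checkJavaVersion()
> >       TRUE
> >     }, error = function(e) FALSE)
> >     knitr::opts_chunk$set(eval = javaOk)  # skip evaluation without Java 8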
> >
> > Version: 2.3.0
> > Check: tests, Result: ERROR
> >     Running ‘run-all.R’ [8s/35s]
> >   Running the tests in ‘tests/run-all.R’ failed.
> >   Last 13 lines of output:
> >     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper",
> >            "fit", formula, data@sdf, tolower(family$family), family$link,
> >            tol, as.integer(maxIter), weightCol, regParam, as.double(var.power),
> >            as.double(link.power), stringIndexerOrderType, offsetCol)
> >     5: invokeJava(isStatic = TRUE, className, methodName, ...)
> >     6: handleErrors(returnStatus, conn)
> >     7: stop(readString(conn))
> >
> >     ══ testthat results ═══════════════════════════════════════════════════
> >     OK: 0 SKIPPED: 0 FAILED: 2
> >     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
> >     2. Error: spark.glm and predict (@test_basic.R#58)
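> >
> > (For reference, the two failures reduce to calls like the following, which
> > die at JVM launch under Java 11; the data and formula here are only
> > illustrative:)
> >
> >     # Illustrative reproduction of the two failing tests:
> >     library(SparkR)
> >     sparkR.session()
> >     df <- createDataFrame(iris)    # 1. create DataFrame from data.frame
> >     model <- spark.glm(df, Sepal_Length ~ Sepal_Width, family = "gaussian")
> >     head(predict(model, df))       # 2. spark.glm and predict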
> >
> >
> >
> > ---------- Forwarded message ---------
> > Date: Mon, Nov 5, 2018, 10:12
> > Subject: [CRAN-pretest-archived] CRAN submission SparkR 2.4.0
> >
> > Dear maintainer,
> >
> > package SparkR_2.4.0.tar.gz does not pass the incoming checks
> > automatically, please see the following pre-tests:
> > Windows: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.4.0_20181105_165757/Windows/00check.log>
> > Status: 1 NOTE
> > Debian: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.4.0_20181105_165757/Debian/00check.log>
> > Status: 1 WARNING, 1 NOTE
> >
> > Last released version's CRAN status: ERROR: 1, OK: 1
> > See: <https://CRAN.R-project.org/web/checks/check_results_SparkR.html>
> >
> > CRAN Web: <https://cran.r-project.org/package=SparkR>
> >
> > Please fix all problems and resubmit a fixed version via the webform.
> > If you are not sure how to fix the problems shown, please ask for help
> > on the R-package-devel mailing list:
> > <https://stat.ethz.ch/mailman/listinfo/r-package-devel>
> > If you are fairly certain the rejection is a false positive, please
> > reply-all to this message and explain.
> >
> > More details are given in the directory:
> > <https://win-builder.r-project.org/incoming_pretest/SparkR_2.4.0_20181105_165757/>
> > The files will be removed after roughly 7 days.
> >
> > No strong reverse dependencies to be checked.
> >
> > Best regards,
> > CRAN teams' auto-check service
> > Flavor: r-devel-linux-x86_64-debian-gcc, r-devel-windows-ix86+x86_64
> > Check: CRAN incoming feasibility, Result: NOTE
> >   Maintainer: 'Shivaram Venkataraman <shiva...@cs.berkeley.edu>'
> >
> >   New submission
> >
> >   Package was archived on CRAN
> >
> >   Possibly mis-spelled words in DESCRIPTION:
> >     Frontend (4:10, 5:28)
> >
> >   CRAN repository db overrides:
> >     X-CRAN-Comment: Archived on 2018-05-01 as check problems were not
> >       corrected despite reminders.
> >
> > Flavor: r-devel-linux-x86_64-debian-gcc
> > Check: re-building of vignette outputs, Result: WARNING
> >   Error in re-building vignettes:
> >     ...
> >
> >   Attaching package: 'SparkR'
> >
> >   The following objects are masked from 'package:stats':
> >
> >       cov, filter, lag, na.omit, predict, sd, var, window
> >
> >   The following objects are masked from 'package:base':
> >
> >       as.data.frame, colnames, colnames<-, drop, endsWith,
> >       intersect, rank, rbind, sample, startsWith, subset, summary,
> >       transform, union
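> >
> >   (These masking notes are expected when SparkR is attached; the stats and
> >   base originals stay reachable through explicit namespacing, e.g.:)
> >
> >       # Calling the masked originals explicitly:
> >       stats::filter(presidents, rep(1, 3) / 3)   # stats' moving-average filter
> >       base::intersect(1:5, 3:7)                  # base's set intersect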
> >
> >   trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz'
> >   Content type 'application/octet-stream' length 227893062 bytes (217.3 MB)
> >   ==================================================
> >   downloaded 217.3 MB
> >
> >   Quitting from lines 65-67 (sparkr-vignettes.Rmd)
> >   Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
> >   Java version 8 is required for this package; found version: 11.0.1
> >   Execution halted
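> >
> >   (If reproducing this locally, pointing the session at a JDK 8 before
> >   loading SparkR avoids the failing version check; the path below is
> >   machine-specific and purely illustrative:)
> >
> >       # Illustrative local workaround, not a fix for the CRAN machines:
> >       Sys.setenv(JAVA_HOME = "/usr/lib/jvm/java-8-openjdk-amd64")
> >       library(SparkR)
> >       sparkR.session()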
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>
