wangyum commented on PR #37983:
URL: https://github.com/apache/spark/pull/37983#issuecomment-1256327547

   These are the RC1 logs:
   ```
   tail -n 500 binary-release-hadoop3.log
   
   Running CRAN check with --as-cran --no-tests options
   * using log directory ‘/opt/spark-rm/output/spark-3.3.1-bin-hadoop3/R/SparkR.Rcheck’
   * using R version 4.2.1 (2022-06-23)
   * using platform: x86_64-pc-linux-gnu (64-bit)
   * using session charset: UTF-8
   * using options ‘--no-tests --as-cran’
   * checking for file ‘SparkR/DESCRIPTION’ ... OK
   * checking extension type ... Package
   * this is package ‘SparkR’ version ‘3.3.1’
   * package encoding: UTF-8
   * checking CRAN incoming feasibility ... NOTE
   Maintainer: ‘The Apache Software Foundation <[email protected]>’
   
   New submission
   
   Package was archived on CRAN
   
   Found the following (possibly) invalid URLs:
     URL: https://spark.apache.org/docs/latest/api/R/column_aggregate_functions.html
       From: inst/doc/sparkr-vignettes.html
       Status: 404
       Message: Not Found
     URL: https://spark.apache.org/docs/latest/api/R/read.df.html
       From: inst/doc/sparkr-vignettes.html
       Status: 404
       Message: Not Found
     URL: https://spark.apache.org/docs/latest/api/R/sparkR.session.html
       From: inst/doc/sparkr-vignettes.html
       Status: 404
       Message: Not Found
   * checking package namespace information ... OK
   * checking package dependencies ... NOTE
   Package suggested but not available for checking: ‘arrow’
   * checking if this is a source package ... OK
   * checking if there is a namespace ... OK
   * checking for .dll and .exe files ... OK
   * checking for hidden files and directories ... OK
   * checking for portable file names ... OK
   * checking for sufficient/correct file permissions ... OK
   * checking whether package ‘SparkR’ can be installed ... OK
   * checking installed package size ... OK
   * checking package directory ... OK
   * checking for future file timestamps ... NOTE
   unable to verify current time
   * checking ‘build’ directory ... OK
   * checking DESCRIPTION meta-information ... OK
   * checking top-level files ... OK
   * checking for left-over files ... OK
   * checking index information ... OK
   * checking package subdirectories ... OK
   * checking R files for non-ASCII characters ... OK
   * checking R files for syntax errors ... OK
   * checking whether the package can be loaded ... OK
   * checking whether the package can be loaded with stated dependencies ... OK
   * checking whether the package can be unloaded cleanly ... OK
   * checking whether the namespace can be loaded with stated dependencies ... OK
   * checking whether the namespace can be unloaded cleanly ... OK
   * checking loading without being on the library search path ... OK
   * checking use of S3 registration ... OK
   * checking dependencies in R code ... OK
   * checking S3 generic/method consistency ... OK
   * checking replacement functions ... OK
   * checking foreign function calls ... OK
   * checking R code for possible problems ... NOTE
   Found if() conditions comparing class() to string:
   File ‘SparkR/R/catalog.R’: if (class(schema) == "structType") ...
   File ‘SparkR/R/catalog.R’: if (class(tableName) != "character") ...
   File ‘SparkR/R/catalog.R’: if (class(viewName) != "character") ...
   File ‘SparkR/R/catalog.R’: if (class(databaseName) != "character") ...
   File ‘SparkR/R/catalog.R’: if (!is.null(databaseName) && class(databaseName) != "character") ...
   File ‘SparkR/R/catalog.R’: if (!is.null(databaseName) && class(databaseName) != "character") ...
   File ‘SparkR/R/catalog.R’: if (!is.null(databaseName) && class(databaseName) != "character") ...
   File ‘SparkR/R/column.R’: if (class(e2) == "Column") ...
   File ‘SparkR/R/column.R’: if (class(data) == "Column") ...
   File ‘SparkR/R/column.R’: if (class(value) == "Column") ...
   File ‘SparkR/R/column.R’: if (class(value) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(value) != "character") ...
   File ‘SparkR/R/DataFrame.R’: if (!is.null(col) && class(col) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (!is.null(col) && class(col) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (is.numeric(numPartitions) && class(col) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(col) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (length(cols) >= 1 && class(cols[[1]]) == "character") ...
   File ‘SparkR/R/DataFrame.R’: if (class(value) != "Column" && !is.null(value)) ...
   File ‘SparkR/R/DataFrame.R’: if (class(i) != "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(c) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(col) != "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(condition) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(joinExpr) != "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(value) == "list") ...
   File ‘SparkR/R/DataFrame.R’: if (class(col) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(col) == "character") ...
   File ‘SparkR/R/DataFrame.R’: if (class(col) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/DataFrame.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(col1) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(col1) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(percentage) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(accuracy) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(schema) == "structType") ...
   File ‘SparkR/R/functions.R’: if (class(schema) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "character") ...
   File ‘SparkR/R/functions.R’: if (class(schema) == "structType") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "character") ...
   File ‘SparkR/R/functions.R’: if (class(value) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(yes) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(no) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "Column") ...
   File ‘SparkR/R/functions.R’: if (class(x) == "character") ...
   File ‘SparkR/R/functions.R’: if (class(count) == "Column") ...
   File ‘SparkR/R/group.R’: if (class(cols[[1]]) == "Column") ...
   File ‘SparkR/R/group.R’: if (class(schema) == "structType") ...
   File ‘SparkR/R/mllib_classification.R’: if (class(lowerBoundsOnCoefficients) != "matrix") ...
   File ‘SparkR/R/mllib_classification.R’: if (class(upperBoundsOnCoefficients) != "matrix") ...
   File ‘SparkR/R/schema.R’: if (class(x) != "character") ...
   File ‘SparkR/R/schema.R’: if (class(type) != "character") ...
   File ‘SparkR/R/schema.R’: if (class(nullable) != "logical") ...
   File ‘SparkR/R/SQLContext.R’: if (class(schema) == "structType") ...
   File ‘SparkR/R/SQLContext.R’: if (class(schema) == "structType") ...
   File ‘SparkR/R/utils.R’: if (class(key) == "integer") ...
   File ‘SparkR/R/utils.R’: if (class(key) == "numeric") ...
   File ‘SparkR/R/utils.R’: if (class(key) == "character") ...
   File ‘SparkR/R/WindowSpec.R’: if (class(col) == "character") ...
   Use inherits() (or maybe is()) instead.
   * checking Rd files ... OK
   * checking Rd metadata ... OK
   * checking Rd line widths ... OK
   * checking Rd cross-references ... OK
   * checking for missing documentation entries ... OK
   * checking for code/documentation mismatches ... OK
   * checking Rd \usage sections ... OK
   * checking Rd contents ... OK
   * checking for unstated dependencies in examples ... OK
   * checking installed files from ‘inst/doc’ ... OK
   * checking files in ‘vignettes’ ... OK
   * checking examples ... OK
   * checking for unstated dependencies in ‘tests’ ... OK
   * checking tests ... SKIPPED
   * checking for unstated dependencies in vignettes ... OK
   * checking package vignettes in ‘inst/doc’ ... OK
   * checking re-building of vignette outputs ... OK
   * checking PDF version of manual ... OK
   * skipping checking HTML version of manual: no command ‘tidy’ found
   * checking for non-standard things in the check directory ... OK
   * checking for detritus in the temp directory ... OK
   * DONE
   Status: 4 NOTEs
   Using R_SCRIPT_PATH = /usr/bin
   Removing lib path and installing from source package
   Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
   Creating a new generic function for ‘colnames’ in package ‘SparkR’
   Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
   Creating a new generic function for ‘cov’ in package ‘SparkR’
   Creating a new generic function for ‘drop’ in package ‘SparkR’
   Creating a new generic function for ‘na.omit’ in package ‘SparkR’
   Creating a new generic function for ‘filter’ in package ‘SparkR’
   Creating a new generic function for ‘intersect’ in package ‘SparkR’
   Creating a new generic function for ‘sample’ in package ‘SparkR’
   Creating a new generic function for ‘transform’ in package ‘SparkR’
   Creating a new generic function for ‘subset’ in package ‘SparkR’
   Creating a new generic function for ‘summary’ in package ‘SparkR’
   Creating a new generic function for ‘union’ in package ‘SparkR’
   Creating a new generic function for ‘endsWith’ in package ‘SparkR’
   Creating a new generic function for ‘startsWith’ in package ‘SparkR’
   Creating a new generic function for ‘lag’ in package ‘SparkR’
   Creating a new generic function for ‘rank’ in package ‘SparkR’
   Creating a new generic function for ‘sd’ in package ‘SparkR’
   Creating a new generic function for ‘var’ in package ‘SparkR’
   Creating a new generic function for ‘window’ in package ‘SparkR’
   Creating a new generic function for ‘predict’ in package ‘SparkR’
   Creating a new generic function for ‘rbind’ in package ‘SparkR’
   Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
   Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
   Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
   Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
   Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
   Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
   Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
   Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
   Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
   /opt/spark-rm/output/spark-3.3.1-bin-hadoop3/R
   ```
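
For context, the third NOTE flags comparisons of `class(x)` against a string: `class()` can return a character vector of length greater than one, so `if (class(x) == "Column")` is fragile, and the check recommends `inherits()` (or `is()`) instead. A minimal sketch of the pattern and its fix, using a hypothetical `DemoCol` class rather than SparkR's actual `Column`:

```r
# Illustrative only: `DemoCol` stands in for SparkR's Column class.
setClass("DemoCol", representation(name = "character"))
col <- new("DemoCol", name = "x")

# Flagged pattern: class(x) may be a character vector of length > 1
# (e.g. for objects with multiple classes), so == is unreliable here.
is_col_fragile <- function(x) class(x) == "DemoCol"

# Recommended fix: inherits() (or methods::is() for S4) tests class
# membership robustly, including superclasses.
is_col <- function(x) inherits(x, "DemoCol")

is_col(col)        # TRUE
is_col("a string") # FALSE
```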


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
