Github user yanboliang commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17274#discussion_r106810830
  
    --- Diff: R/pkg/inst/tests/testthat/test_context.R ---
    @@ -177,6 +177,13 @@ test_that("add and get file to be downloaded with Spark job on every node", {
       spark.addFile(path)
       download_path <- spark.getSparkFiles(filename)
       expect_equal(readLines(download_path), words)
    +
    +  # Test spark.getSparkFiles works well on executors.
    +  seq <- seq(from = 1, to = 10, length.out = 5)
    +  f <- function(seq) { readLines(spark.getSparkFiles(filename)) }
    +  results <- spark.lapply(seq, f)
    +  for (i in 1:5) { expect_equal(results[[i]], words) }
    +
    --- End diff ---
    
    Reading files in a UDF is the main use case for this fix. It passes the test in the SparkR console and in jobs submitted with ```bin/spark-submit test.R``` (local mode) or ```bin/spark-submit --master yarn test.R``` (YARN mode). These two scenarios are the most common use cases for this function, and passing the tests on a real cluster is convincing enough.
    I suspect the odd failure in the earlier ```run-tests.sh``` run was caused by other issues (such as the test infrastructure), not by the fix itself. So I think we can merge this and leave a TODO if we can't figure out the root cause of that odd test failure for the moment, since the 2.2 code freeze is coming. What do you think?
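
    For context, the end-to-end check described in the comment would look roughly like the following standalone script. This is a minimal sketch: the file name, file contents, and the ```test.R``` script name are illustrative, not taken from the PR.

```r
# Minimal sketch (assumed file name/contents, hypothetical test.R) of the
# manual verification described above; run with `bin/spark-submit test.R`
# (local mode) or `bin/spark-submit --master yarn test.R` (yarn mode).
library(SparkR)
sparkR.session(appName = "getSparkFilesCheck")

# Distribute a small local file to every node.
path <- tempfile(pattern = "words", fileext = ".txt")
words <- c("hello", "spark")
writeLines(words, path)
spark.addFile(path)

# Resolve and read the distributed file inside the UDF on the executors.
results <- spark.lapply(1:5, function(i) {
  readLines(spark.getSparkFiles(basename(path)))
})
stopifnot(all(sapply(results, identical, words)))

sparkR.session.stop()
```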

