Github user sun-rui commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11457#discussion_r54726090
  
    --- Diff: R/pkg/inst/tests/testthat/test_context.R ---
    @@ -26,7 +26,7 @@ test_that("Check masked functions", {
       maskedBySparkR <- masked[funcSparkROrEmpty]
       namesOfMasked <- c("describe", "cov", "filter", "lag", "na.omit", 
"predict", "sd", "var",
                          "colnames", "colnames<-", "intersect", "rank", 
"rbind", "sample", "subset",
    -                     "summary", "transform", "drop")
    +                     "summary", "transform", "drop", "read.csv", 
"write.csv")
    --- End diff ---
    
    There are existing read.csv()/read.csv2() and write.csv()/write.csv2() functions in base R (the utils package). It would be great if we could implement these functions in SparkR with the same signatures as their base counterparts, so that users can call them the same way in both SparkR and base R. It also seems that the options supported in the base signatures of these functions are supported by spark-csv as well.
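
    As a rough sketch of what a base-R-compatible signature could look like on the SparkR side (hypothetical wrappers, not the actual implementation): this assumes the existing SparkR read.df()/write.df() API with the com.databricks.spark.csv data source, and maps a few base-R arguments such as header, sep, and quote onto the corresponding spark-csv options.

        # Hypothetical SparkR wrapper mirroring utils::read.csv(file, header = TRUE, sep = ",", quote = "\"", ...).
        # Extra named arguments are passed through read.df()'s "..." as spark-csv options.
        read.csv <- function(sqlContext, file, header = TRUE, sep = ",", quote = "\"", ...) {
          read.df(sqlContext, path = file, source = "com.databricks.spark.csv",
                  header = tolower(as.character(header)),  # spark-csv expects "true"/"false"
                  delimiter = sep,
                  quote = quote,
                  inferSchema = "true",
                  ...)
        }

        # Similarly, a write.csv() sketch mirroring utils::write.csv(x, file, ...):
        write.csv <- function(df, file, ...) {
          write.df(df, path = file, source = "com.databricks.spark.csv",
                   header = "true", ...)
        }

        # Usage would then look like base R:
        # df <- read.csv(sqlContext, "people.csv", header = TRUE, sep = ";")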

