Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/11457#discussion_r54846091
--- Diff: R/pkg/inst/tests/testthat/test_context.R ---
@@ -26,7 +26,7 @@ test_that("Check masked functions", {
   maskedBySparkR <- masked[funcSparkROrEmpty]
   namesOfMasked <- c("describe", "cov", "filter", "lag", "na.omit",
                      "predict", "sd", "var",
                      "colnames", "colnames<-", "intersect", "rank",
                      "rbind", "sample", "subset",
-                     "summary", "transform", "drop")
+                     "summary", "transform", "drop", "read.csv",
                      "write.csv")
--- End diff --
@felixcheung Actually, I have been looking at the link and working on this;
however, I realised that it cannot use exactly the same parameters, since
spark-csv does not support all of them. I tried adding some of the possible
parameters locally, but it ended up as an incomplete version.
This makes me think this PR might have to be merged first, with the remaining
parameters handled later (also partly because the CSV options are not
confirmed yet and not fully merged yet,
[SPARK-12420](https://issues.apache.org/jira/browse/SPARK-12420)).
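To illustrate the mismatch, here is a rough sketch of the kind of wrapper I was trying locally. It is not a final API: the `"csv"` source name and the `header` / `nullValue` option names are assumptions that depend on how SPARK-12420 lands, and the function name only mirrors base R's `read.csv` (which is why it shows up in the masked-functions test above).

```r
# Rough local sketch only, not a proposed API. It assumes the built-in "csv"
# source from SPARK-12420 and the "header" / "nullValue" options, which may
# change. Base R arguments such as colClasses, skip and comment.char have no
# data source counterpart, so only a subset of read.csv()'s parameters maps.
read.csv <- function(sqlContext, file, header = FALSE, na.strings = "NA", ...) {
  read.df(sqlContext, path = file, source = "csv",
          header = tolower(as.character(header)),  # data source options are strings
          nullValue = na.strings[1],               # only a single NA marker is supported
          ...)
}

# hypothetical usage: df <- read.csv(sqlContext, "cars.csv", header = TRUE)
```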