[
https://issues.apache.org/jira/browse/SPARK-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049887#comment-15049887
]
Sun Rui commented on SPARK-12232:
---------------------------------
[~felixcheung]
yeah, table() and df.read.table() both work on SQLContext, so there is no need
to add a new table() for exposing read.table. The current table() is enough.
The SQLContext created in SparkR is of class "jobj".
> Consider exporting read.table in R
> ----------------------------------
>
> Key: SPARK-12232
> URL: https://issues.apache.org/jira/browse/SPARK-12232
> Project: Spark
> Issue Type: Bug
> Components: SparkR
> Affects Versions: 1.5.2
> Reporter: Felix Cheung
> Priority: Minor
>
> Since we have read.df, read.json, read.parquet (some in pending PRs) and
> table(), we should consider adding read.table() for consistency and
> R-likeness.
> However, this conflicts with utils::read.table, which returns an R data.frame.
> It seems neither table() nor read.table() is desirable in this case.
> table: https://stat.ethz.ch/R-manual/R-devel/library/base/html/table.html
> read.table:
> https://stat.ethz.ch/R-manual/R-devel/library/utils/html/read.table.html
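The masking concern above can be seen with a minimal base-R sketch (no SparkR required); the comment about SparkR's hypothetical export is an assumption about what such an export would do:

```r
# utils::read.table (attached by default) parses delimited text into a
# local, in-memory data.frame:
df <- read.table(text = "a b\n1 2\n3 4", header = TRUE)
class(df)   # "data.frame"
nrow(df)    # 2

# If SparkR also exported a read.table() returning a Spark DataFrame
# (hypothetical), attaching SparkR would mask utils::read.table, and
# existing scripts expecting a local data.frame would have to call
# utils::read.table(...) with an explicit namespace prefix.
```

Because `library(SparkR)` would place SparkR ahead of `utils` on the search path, the masking would silently change the return type of unqualified `read.table()` calls, which is why neither name is attractive here.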
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)