[ https://issues.apache.org/jira/browse/SPARK-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051351#comment-15051351 ]

Shivaram Venkataraman commented on SPARK-12232:
-----------------------------------------------

Yeah, I'm not sure we should expose read.table. I think read.df serves a 
similar purpose and has the added advantage of not conflicting with existing R 
functions. To convert a SQL table to a DataFrame, I think introducing a new 
function like `sqlTableToDF` makes more sense. We can deprecate `table` in 
SparkR (again, it's not great to conflict with contingency tables, even if 
they are not popular) and consider removing it in 2.0.
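To make the conflict concrete, here is a short R sketch. The live line uses only `utils::read.table`; the SparkR calls are shown commented out since they need a running Spark context, and `sqlTableToDF` is only the name proposed in this thread, not an existing function:

```r
# utils::read.table parses delimited text into a *local* R data.frame.
local_df <- utils::read.table(text = "x y\n1 a\n2 b", header = TRUE)
class(local_df)  # "data.frame"

# SparkR's table() instead looks up a SQL table and returns a distributed
# Spark DataFrame; attaching SparkR therefore shadows base::table(), which
# builds contingency tables. A SparkR read.table() would likewise shadow
# utils::read.table() above.
# library(SparkR)
# df <- table(sqlContext, "people")   # current API; conflicts with base::table
# df <- read.df(sqlContext, "people.json", source = "json")  # no conflict

# Proposed alternative (hypothetical name from this discussion):
# df <- sqlTableToDF(sqlContext, "people")
```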

> Consider exporting read.table in R
> ----------------------------------
>
>                 Key: SPARK-12232
>                 URL: https://issues.apache.org/jira/browse/SPARK-12232
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 1.5.2
>            Reporter: Felix Cheung
>            Priority: Minor
>
> Since we have read.df, read.json, and read.parquet (some in pending PRs), as 
> well as table(), we should consider adding read.table() for consistency and 
> R-likeness.
> However, this conflicts with utils::read.table, which returns an R data.frame.
> It seems neither table() nor read.table() is desirable in this case.
> table: https://stat.ethz.ch/R-manual/R-devel/library/base/html/table.html
> read.table: 
> https://stat.ethz.ch/R-manual/R-devel/library/utils/html/read.table.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
