[
https://issues.apache.org/jira/browse/SPARK-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14595153#comment-14595153
]
Shivaram Venkataraman commented on SPARK-7499:
----------------------------------------------
[~sd2k] Thanks for taking a shot at this -- One question I have about this
approach is whether we can avoid implementing new versions of all these
functions. That is, is there a common sub-routine we can create that, given a
set of identifiers / columns, will resolve them to SparkR DataFrame columns?
Also, just to help me understand this better, what is the specific function in
dplyr we are using here?
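
For illustration only, a minimal sketch of what such a shared helper might
look like. The name resolveColumns and the overall approach are assumptions
for the sketch, not an existing SparkR API; columns() and column extraction
via [[ are assumed to behave as in SparkR:
{code}
# Hypothetical shared helper: capture the unevaluated arguments and evaluate
# them in an environment where every column name is bound to its Column
# handle, so that `age > 10` resolves the same way `df$age > 10` would.
resolveColumns <- function(df, ...) {
  exprs <- as.list(substitute(list(...)))[-1]   # unevaluated arguments
  colEnv <- new.env(parent = parent.frame())    # fall back to caller for other vars
  for (name in columns(df)) {
    assign(name, df[[name]], envir = colEnv)    # bind name -> Column handle
  }
  lapply(exprs, eval, envir = colEnv)           # list of resolved Columns
}

# Each verb could then delegate to the helper instead of re-implementing the
# resolution, e.g. a select wrapper might call:
#   do.call(select, c(list(df), resolveColumns(df, ...)))
{code}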
> Investigate how to specify columns in SparkR without $ or strings
> -----------------------------------------------------------------
>
> Key: SPARK-7499
> URL: https://issues.apache.org/jira/browse/SPARK-7499
> Project: Spark
> Issue Type: Improvement
> Components: SparkR
> Reporter: Shivaram Venkataraman
>
> Right now in SparkR we need to specify columns using `$` or strings.
> For example, to run a select we would do
> {code}
> df1 <- select(df, df$age > 10)
> {code}
> It would be good to infer the set of columns in a DataFrame automatically and
> resolve bare symbols to the corresponding columns. For example
> {code}
> df1 <- select(df, age > 10)
> {code}
> One way to do this is to build an environment mapping all the column names to
> column handles and then use `substitute(arg, env = columnNameEnv)`.
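
For illustration only, the environment-plus-substitute idea from the
description might look roughly like this. The names makeColumnEnv and
selectNSE are made up for the sketch; columns(), [[ extraction, and select()
are assumed to behave as in the SparkR API:
{code}
# Sketch only: map each column name to its Column handle, then rewrite a
# captured expression against that mapping before evaluating it.
makeColumnEnv <- function(df) {
  columnNameEnv <- new.env(parent = emptyenv())
  for (name in columns(df)) {
    assign(name, df[[name]], envir = columnNameEnv)
  }
  columnNameEnv
}

# select(df, age > 10) could then be captured roughly like this:
selectNSE <- function(df, expr) {
  captured <- substitute(expr)                  # e.g. quote(age > 10)
  columnNameEnv <- makeColumnEnv(df)
  # substitute() quotes its first argument, so do.call() is needed to apply
  # it to the already-captured call rather than to the symbol `captured`.
  rewritten <- do.call(substitute, list(captured, columnNameEnv))
  select(df, eval(rewritten, envir = parent.frame()))
}
{code}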