Sun, Rui wrote:
> The existing algorithms operating on R data.frame can't simply operate
> on SparkR DataFrame. They have to be re-implemented to be based on the
> SparkR DataFrame API.
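
As an illustration of that distinction, here is a minimal sketch (not code from this thread): kmeans() stands in for one of the existing clustering functions, and the session setup assumes the Spark 1.5-era SparkR API.

library(SparkR)

# Spark 1.5-era session setup (assumption: a local master).
sc <- sparkR.init(master = "local")
sqlContext <- sparkRSQL.init(sc)

# A SparkR DataFrame is a handle to distributed data, not an R data.frame.
localDF <- data.frame(x = c(1, 4, 7), y = c(2, 5, 8), z = c(3, 6, 9))
df <- createDataFrame(sqlContext, localDF)

# An existing algorithm written against R data.frame can only run after the
# data is brought back to the driver with collect(), which returns a plain
# R data.frame.
collected <- collect(df)
fit <- kmeans(collected, centers = 2)   # stand-in for an existing algorithm

# The alternative is to re-express the algorithm itself in terms of SparkR
# DataFrame operations (select, groupBy, agg, ...), i.e. re-implement it
# against the SparkR DataFrame API.

The caveat with the collect() route is that it pulls the whole dataset to the driver, so it only helps when the term document matrix fits in local memory.
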
-----Original Message-----
From: ekraffmiller [mailto:ellen.kraffmil...@gmail.com]
Sent: Thursday, September 17, 2015 3:30 AM
To: user@spark.apache.org
Subject: SparkR - calling as.vector() with rdd dataframe causes error

Hi,
I have a library of clustering algorithms that I'm trying to run in the
SparkR interactive shell. (I am working on a proof of concept for a document
classification tool.) Each algorithm takes a term document matrix in the
form of a dataframe. When I pass the method a local dataframe, the
clustering ...

Also, just for completeness, matrix.csv contains:
1,2,3
4,5,6
7,8,9
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/SparkR-calling-as-vector-with-rdd-dataframe-causes-error-tp24717p24719.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
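
For completeness, a minimal sketch of the failing pattern and a workaround, assuming the matrix.csv above is first read locally (the exact read call used in the original script is not shown in this thread):

library(SparkR)

sc <- sparkR.init(master = "local")
sqlContext <- sparkRSQL.init(sc)

# Build a distributed DataFrame from the 3 x 3 matrix.csv shown above.
localMatrix <- read.csv("matrix.csv", header = FALSE)
rddDF <- createDataFrame(sqlContext, localMatrix)

# Failing pattern: a SparkR DataFrame is an S4 object that only references
# remote data, so base R coercion such as as.vector() cannot flatten it.
# as.vector(rddDF)   # errors

# Workaround: collect() first, then coerce the local result.
v <- as.vector(as.matrix(collect(rddDF)))   # numeric vector of length 9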