Peter's suggestion sounds good, but watch out for the match case, since I
believe you'll have to match on:

case (Row(feature1, feature2, ...), Row(label)) =>
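For reference, a minimal sketch of the full flow under that assumption. The column names (feature1, feature2, cassification) are taken from the original question; the number and types of the feature columns are assumptions for illustration, and depending on your Spark version you may need to go through .rdd before zipping, since zip is an RDD operation:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

val df = sqlContext.jsonFile("file.json")
val features = df.select("feature1", "feature2")
val labels = df.select("cassification")

// zip pairs the i-th feature Row with the i-th label Row, so the
// match is on a tuple of two Rows, not a single flat Row.
val labeledPoints: RDD[LabeledPoint] = features.rdd.zip(labels.rdd).map {
  case (Row(f1: Double, f2: Double), Row(label: Double)) =>
    LabeledPoint(label, Vectors.dense(f1, f2))
}
```

Note that zip requires both RDDs to have the same number of partitions and elements per partition; since both selects come from the same DataFrame, that should hold here.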

On Thu, Apr 2, 2015 at 7:57 AM, Peter Rudenko <petro.rude...@gmail.com>
wrote:

>  Hi try next code:
>
> val labeledPoints: RDD[LabeledPoint] = features.zip(labels).map{
>     case Row(feature1, feature2, ..., label) => LabeledPoint(label, 
> Vectors.dense(feature1, feature2, ...))
> }
>
> Thanks,
> Peter Rudenko
>
> On 2015-04-02 17:17, drarse wrote:
>
>   Hello!
>
> I have had a question for a few days now. I am working with DataFrames, and
> with Spark SQL I imported a JSON file:
>
> val df = sqlContext.jsonFile("file.json")
>
> In this JSON I have the label and the features. I selected them:
>
> val features = df.select("feature1","feature2","feature3",...);
>
> val labels = df.select("cassification")
>
> But now I don't know how to create a LabeledPoint for RandomForest. I tried
> some solutions without success. Can you help me?
>
> Thanks for all!
>
>
>
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/From-DataFrame-to-LabeledPoint-tp22354.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
