[ https://issues.apache.org/jira/browse/MAHOUT-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15762885#comment-15762885 ]

ASF GitHub Bot commented on MAHOUT-1896:
----------------------------------------

Github user andrewpalumbo commented on a diff in the pull request:

    https://github.com/apache/mahout/pull/263#discussion_r93155654
  
    --- Diff: spark/src/main/scala/org/apache/mahout/sparkbindings/package.scala ---
    @@ -141,6 +144,42 @@ package object sparkbindings {
         new CheckpointedDrmSpark[K](rddInput = rdd, _nrow = nrow, _ncol = ncol, cacheHint = cacheHint,
           _canHaveMissingRows = canHaveMissingRows)
     
    +  /** A drmWrap version that takes an RDD[org.apache.spark.mllib.regression.LabeledPoint]
    +    * and returns a DRM where column 0 is the label */
    +  def drmWrapMLLibLabeledPoint(rdd: RDD[LabeledPoint],
    +                   nrow: Long = -1,
    +                   ncol: Int = -1,
    +                   cacheHint: CacheHint.CacheHint = CacheHint.NONE,
    +                   canHaveMissingRows: Boolean = false): CheckpointedDrm[Int] = {
    +    val drmRDD: DrmRdd[Int] = rdd.zipWithIndex.map( lv => (lv._2.toInt,
    +      new org.apache.mahout.math.DenseVector( Array(lv._1.label) ++ lv._1.features.toArray)) )
    --- End diff ---
    
    I don't think that this should always be dense
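    
    For reference, a minimal sketch of what a sparsity-preserving mapping could look like. This is not the patch itself: the pattern match on MLlib's SparseVector, the use of Mahout's RandomAccessSparseVector, and the helper name labeledPointToMahoutVector are illustrative assumptions.
    
        import org.apache.spark.mllib.linalg.{SparseVector => MLLibSparseVector}
        import org.apache.spark.mllib.regression.LabeledPoint
        import org.apache.mahout.math.{DenseVector, RandomAccessSparseVector, Vector => MahoutVector}
    
        // Prepend the label as column 0 and keep sparse inputs sparse.
        def labeledPointToMahoutVector(lp: LabeledPoint): MahoutVector = lp.features match {
          case sv: MLLibSparseVector =>
            // sparse features: sparse output, cardinality grows by 1 for the label column
            val v = new RandomAccessSparseVector(sv.size + 1)
            v.setQuick(0, lp.label)
            for (i <- sv.indices.indices)
              v.setQuick(sv.indices(i) + 1, sv.values(i))
            v
          case dv =>
            // dense features: dense output, label in column 0
            new DenseVector(Array(lp.label) ++ dv.toArray)
        }
    
    The zipWithIndex in the patch could then map each (LabeledPoint, index) pair to (index.toInt, labeledPointToMahoutVector(lp)) without forcing density.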



> Add convenience methods for interacting with Spark ML
> -----------------------------------------------------
>
>                 Key: MAHOUT-1896
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1896
>             Project: Mahout
>          Issue Type: Bug
>    Affects Versions: 0.12.2
>            Reporter: Trevor Grant
>            Assignee: Trevor Grant
>            Priority: Minor
>             Fix For: 0.13.0
>
>
> Currently the method for ingesting RDDs into a DRM is `drmWrap`. This is a
> flexible method; however, there are many cases where the RDD to be wrapped is
> an RDD[org.apache.spark.mllib.linalg.Vector], an
> RDD[org.apache.spark.mllib.regression.LabeledPoint], or a DataFrame[Row] (as is
> the case when working with Spark ML). It makes sense to create convenience
> methods for converting these types to DRM.
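
A minimal usage sketch of what such a convenience method might look like from the caller's side, assuming the drmWrapMLLibLabeledPoint helper proposed in the PR diff above and an existing SparkContext named sc:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint
    import org.apache.mahout.sparkbindings._

    val points = sc.parallelize(Seq(
      LabeledPoint(1.0, Vectors.dense(0.5, 2.0)),
      LabeledPoint(0.0, Vectors.dense(1.5, 3.0))))

    // Column 0 of the resulting DRM holds the label; columns 1..n hold the features.
    val drm = drmWrapMLLibLabeledPoint(points)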



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
