Github user dorx commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1367#discussion_r15036448
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/stat/correlation/Correlation.scala ---
    @@ -0,0 +1,121 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.mllib.stat.correlation
    +
    +import org.apache.spark.mllib.linalg.{DenseVector, Matrix, Vector}
    +import org.apache.spark.rdd.RDD
    +
    +/**
    + * New correlation algorithms should implement this trait
    + */
    +trait Correlation {
    +
    +  /**
    +   * Compute correlation for two datasets.
    +   */
    +  def computeCorrelation(x: RDD[Double], y: RDD[Double]): Double
    +
    +  /**
    +   * Compute the correlation matrix S, for the input matrix, where S(i, j) is the correlation
    +   * between column i and j.
    +   */
    +  def computeCorrelationMatrix(X: RDD[Vector]): Matrix
    +
    +  /**
    +   * Combine the two input RDD[Double]s into an RDD[Vector] and compute the correlation using the
    +   * correlation implementation for RDD[Vector]
    +   */
    +  def computeCorrelationWithMatrixImpl(x: RDD[Double], y: RDD[Double]): Double = {
    +    val mat: RDD[Vector] = x.zip(y).mapPartitions({ iter =>
    --- End diff --
    
    I did that before my last comment. Here's `zip`:
    
    def zip[U: ClassTag](other: RDD[U]): RDD[(T, U)] = {
      zipPartitions(other, true) { (thisIter, otherIter) =>
        new Iterator[(T, U)] {
          def hasNext = (thisIter.hasNext, otherIter.hasNext) match {
            case (true, true) => true
            case (false, false) => false
            case _ => throw new SparkException("Can only zip RDDs with " +
              "same number of elements in each partition")
          }
          def next = (thisIter.next, otherIter.next)
        }
      }
    }
    
    `zipPartitions` here returns `new ZippedPartitionsRDD2(sc, sc.clean(f), this, rdd2, preservesPartitioning)`, which extends `ZippedPartitionsBaseRDD`, which has the following:
    
    override val partitioner =
      if (preservesPartitioning) firstParent[Any].partitioner else None
    
    So yes, the partitioner is propagated from the parent via zip.
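    
    This is easy to check directly in a spark-shell. A minimal sketch (the RDD names here are just for illustration; it assumes a `SparkContext` named `sc`):
    
    ```scala
    import org.apache.spark.HashPartitioner
    
    // Give the parent RDD an explicit partitioner, then derive a second RDD
    // with mapValues, which keeps the same partitioning and per-partition sizes.
    val parent = sc.parallelize(1 to 100)
      .map(i => (i, i.toDouble))
      .partitionBy(new HashPartitioner(4))
    val other = parent.mapValues(_ * 2.0)
    
    // zip calls zipPartitions(other, true), so ZippedPartitionsBaseRDD
    // should pick up the first parent's partitioner.
    val zipped = parent.zip(other)
    println(zipped.partitioner == parent.partitioner)
    ```
    
    If the partitioner is propagated as described, this prints `true`.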

