Github user MLnick commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17742#discussion_r114726541
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala ---
    @@ -274,46 +275,62 @@ object MatrixFactorizationModel extends Loader[MatrixFactorizationModel] {
           srcFeatures: RDD[(Int, Array[Double])],
           dstFeatures: RDD[(Int, Array[Double])],
           num: Int): RDD[(Int, Array[(Int, Double)])] = {
    -    val srcBlocks = blockify(rank, srcFeatures)
    -    val dstBlocks = blockify(rank, dstFeatures)
    -    val ratings = srcBlocks.cartesian(dstBlocks).flatMap {
    -      case ((srcIds, srcFactors), (dstIds, dstFactors)) =>
    -        val m = srcIds.length
    -        val n = dstIds.length
    -        val ratings = srcFactors.transpose.multiply(dstFactors)
    -        val output = new Array[(Int, (Int, Double))](m * n)
    -        var k = 0
    -        ratings.foreachActive { (i, j, r) =>
    -          output(k) = (srcIds(i), (dstIds(j), r))
    -          k += 1
    +    val srcBlocks = blockify(srcFeatures)
    +    val dstBlocks = blockify(dstFeatures)
    +    /**
    +     * The previous approach used for computing top-k recommendations aimed to group
    +     * individual factor vectors into blocks, so that Level 3 BLAS operations (gemm) could
    +     * be used for efficiency. However, this causes excessive GC pressure due to the large
    +     * arrays required for intermediate result storage, as well as a high sensitivity to the
    +     * block size used.
    +     * The following approach still groups factors into blocks, but instead computes the
    +     * top-k elements per block, using Level 1 BLAS (dot) and an efficient
    +     * BoundedPriorityQueue. This avoids any large intermediate data structures and results
    +     * in significantly reduced GC pressure as well as shuffle data, which far outweighs
    +     * any cost incurred from not using Level 3 BLAS operations.
    +     */
    +    val ratings = srcBlocks.cartesian(dstBlocks).flatMap { case (srcIter, dstIter) =>
    +      val m = srcIter.size
    +      val n = math.min(dstIter.size, num)
    +      val output = new Array[(Int, (Int, Double))](m * n)
    +      var j = 0
    +      val pq = new BoundedPriorityQueue[(Int, Double)](n)(Ordering.by(_._2))
    +      srcIter.foreach { case (srcId, srcFactor) =>
    +        dstIter.foreach { case (dstId, dstFactor) =>
    +          /**
    +           * The hand-written dot product below is more efficient than a call through
    +           * BLAS.dot to the native BLAS backend, and matches the performance of the
    +           * fallback F2jBLAS backend.
    +           */
    +          var score: Double = 0
    +          var k = 0
    +          while (k < rank) {
    +            score += srcFactor(k) * dstFactor(k)
    +            k += 1
    +          }
    +          pq += ((dstId, score))
    +        }
    +        val pqIter = pq.iterator
    +        var i = 0
    +        while (i < n) {
    +          output(j + i) = (srcId, pqIter.next())
    +          i += 1
             }
    -        output.toSeq
    +        j += n
    +        pq.clear()
    +      }
    +      output.toSeq
         }
         ratings.topByKey(num)(Ordering.by(_._2))
       }
     
       /**
    -   * Blockifies features to use Level-3 BLAS.
    +   * Blockifies features to improve the efficiency of the cartesian product.
        */
       private def blockify(
    -      rank: Int,
    -      features: RDD[(Int, Array[Double])]): RDD[(Array[Int], DenseMatrix)] = {
    +      features: RDD[(Int, Array[Double])]): RDD[Seq[(Int, Array[Double])]] = {
    --- End diff ---
    
    Yes, less sensitive. See https://issues.apache.org/jira/browse/SPARK-20443. It may be that we make the block size tunable, or use experiments to settle on a block size that is generally optimal (2048 seemed best in those experiments).
    
    But we would need to perform experiments over a wide range of data sizes (and check the performance of both `recommendForAllUsers` and `recommendForAllItems`).


