Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/493#discussion_r11985831
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/recommendation/ALS.scala ---
@@ -708,6 +708,86 @@ object ALS {
trainImplicit(ratings, rank, iterations, 0.01, -1, 1.0)
}
+  /**
+   * :: DeveloperApi ::
+   * Given an RDD of ratings, a rank, and two partitioners, compute rough estimates of the
+   * computation time and communication cost of one iteration of ALS. Returns a pair of pairs of
+   * Maps. The first pair of maps represents computation time in unspecified units. The second
+   * pair of maps represents communication cost in uncompressed bytes. The first element of each
+   * pair is the cost attributable to user partitioning, while the second is the cost attributable
+   * to product partitioning.
+   *
+   * @param ratings RDD of Rating objects
+   * @param rank number of features to use
+   * @param userPartitioner partitioner for partitioning users
+   * @param productPartitioner partitioner for partitioning products
+   */
+  @DeveloperApi
+  def evaluatePartitioner(ratings: RDD[Rating], rank: Int, userPartitioner: Partitioner,
--- End diff --
Do you think `estimateCosts` would be a better name? Could we hide this function and instead
expose `analyze(ratings, rank, numUserBlocks, numProductBlocks)`, so that it is easier to try?
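
As a minimal sketch of what that convenience wrapper could look like, assuming the PR's
`evaluatePartitioner` lands with the signature shown in the diff; the `PartitioningAnalysis`
object name and the choice of `HashPartitioner` below are illustrative assumptions, not code
from the PR:

```scala
import org.apache.spark.HashPartitioner
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.recommendation.{ALS, Rating}

object PartitioningAnalysis {

  /**
   * Hypothetical convenience wrapper: build HashPartitioners from the block
   * counts and delegate to the lower-level estimator added in this PR
   * (`evaluatePartitioner` above, or `estimateCosts` if renamed).
   */
  def analyze(
      ratings: RDD[Rating],
      rank: Int,
      numUserBlocks: Int,
      numProductBlocks: Int) =
    ALS.evaluatePartitioner(
      ratings,
      rank,
      new HashPartitioner(numUserBlocks),
      new HashPartitioner(numProductBlocks))
}
```

The return value would still be the pair of (computation time, communication cost) map pairs
documented above, so callers could compare user vs. product partitioning costs without having to
construct partitioners themselves.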