Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/897#discussion_r13357467
  
    --- Diff: core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala ---
    @@ -672,40 +672,102 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
     
       /**
        * Return approximate number of distinct values for each key in this RDD.
    -   * The accuracy of approximation can be controlled through the relative standard deviation
    -   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
    -   * more accurate counts but increase the memory footprint and vise versa. Uses the provided
    -   * Partitioner to partition the output RDD.
    +   *
    +   * The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice:
    +   * Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available
    +   * <a href="http://research.google.com/pubs/pub40671.html">here</a>.
    +   *
    +   * @param p The precision value for the normal set.
    +   *          `p` must be a value between 4 and `sp` (32 max).
    +   * @param sp The precision value for the sparse set, between 0 and 32.
    +   *           If `sp` equals 0, the sparse representation is skipped.
    +   * @param partitioner Partitioner to use for the resulting RDD.
        */
    -  def countApproxDistinctByKey(relativeSD: Double, partitioner: Partitioner): JavaRDD[(K, Long)] = {
    -    rdd.countApproxDistinctByKey(relativeSD, partitioner)
    +  def countApproxDistinctByKey(p: Int, sp: Int, partitioner: Partitioner): JavaPairRDD[K, Long] = {
    +    fromRDD(rdd.countApproxDistinctByKey(p, sp, partitioner))
       }
     
       /**
    -   * Return approximate number of distinct values for each key this RDD.
    -   * The accuracy of approximation can be controlled through the relative standard deviation
    -   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
    -   * more accurate counts but increase the memory footprint and vise versa. The default value of
    -   * relativeSD is 0.05. Hash-partitions the output RDD using the existing partitioner/parallelism
    -   * level.
    +   * Return approximate number of distinct values for each key in this RDD.
    +   *
    +   * The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice:
    +   * Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available
    +   * <a href="http://research.google.com/pubs/pub40671.html">here</a>.
    +   *
    +   * @param p The precision value for the normal set.
    +   *          `p` must be a value between 4 and `sp` (32 max).
    +   * @param sp The precision value for the sparse set, between 0 and 32.
    +   *           If `sp` equals 0, the sparse representation is skipped.
    +   * @param numPartitions The number of partitions in the resulting RDD.
        */
    -  def countApproxDistinctByKey(relativeSD: Double = 0.05): JavaRDD[(K, Long)] = {
    -    rdd.countApproxDistinctByKey(relativeSD)
    +  def countApproxDistinctByKey(p: Int, sp: Int, numPartitions: Int): JavaPairRDD[K, Long] = {
    +    fromRDD(rdd.countApproxDistinctByKey(p, sp, numPartitions))
       }
     
    -
       /**
        * Return approximate number of distinct values for each key in this RDD.
    -   * The accuracy of approximation can be controlled through the relative standard deviation
    -   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
    -   * more accurate counts but increase the memory footprint and vise versa. HashPartitions the
    -   * output RDD into numPartitions.
        *
    +   * The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice:
    +   * Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available
    +   * <a href="http://research.google.com/pubs/pub40671.html">here</a>.
    +   *
    +   * @param p The precision value for the normal set.
    +   *          `p` must be a value between 4 and `sp` (32 max).
    +   * @param sp The precision value for the sparse set, between 0 and 32.
    +   *           If `sp` equals 0, the sparse representation is skipped.
    +   */
    +  def countApproxDistinctByKey(p: Int, sp: Int): JavaPairRDD[K, Long] = {
    +    fromRDD(rdd.countApproxDistinctByKey(p, sp))
    +  }
    +
    +  /**
    +   * Return approximate number of distinct values for each key in this RDD. This is deprecated.
    +   * Use the variant with `p` and `sp` parameters instead.
    +   *
    +   * The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice:
    +   * Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available
    +   * <a href="http://research.google.com/pubs/pub40671.html">here</a>.
    +   *
    +   * @param relativeSD The relative standard deviation for the counter.
    +   *                   Smaller values create counters that require more space.
        */
    +  @Deprecated
    --- End diff ---
    
    We need to provide some migration tips. Is there a mapping from 
`relativeSD` to precision numbers? Actually, `relativeSD` is much easier for 
users to understand.
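
    As a rough migration tip, one possible mapping follows from the standard HyperLogLog error bound of about 1.04 / sqrt(2^p): inverting it gives p ≈ 2 * log2(1.04 / relativeSD). The sketch below is illustrative only (the class and method names are made up, not part of Spark), under that assumption:

    ```java
    public final class RelativeSDToPrecision {
        // Illustrative helper, not Spark API: HyperLogLog's relative standard
        // error is roughly 1.04 / sqrt(2^p), so we invert that bound to pick
        // the smallest precision p whose expected error is <= relativeSD.
        public static int precisionFor(double relativeSD) {
            if (relativeSD <= 0.0) {
                throw new IllegalArgumentException("relativeSD must be positive");
            }
            // p = ceil(2 * log2(1.04 / relativeSD))
            return (int) Math.ceil(2.0 * Math.log(1.04 / relativeSD) / Math.log(2.0));
        }

        public static void main(String[] args) {
            // The old default relativeSD = 0.05 maps to p = 9 under this formula,
            // since 1.04 / sqrt(2^9) ~ 0.046 <= 0.05.
            System.out.println(precisionFor(0.05));
        }
    }
    ```

    Something along these lines in the deprecation note would give users a concrete starting point for choosing `p`, even if they then tune `sp` separately.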

