Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10554#discussion_r48938992

--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala ---
@@ -288,17 +288,18 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
    * immediately to the master as a Map. This will also perform the merging locally on each mapper
    * before sending results to a reducer, similarly to a "combiner" in MapReduce.
    */
-  def reduceByKeyLocally(func: JFunction2[V, V, V]): java.util.Map[K, V] =
+  def reduceByKeyLocally(func: JFunction2[V, V, V]): JMap[K, V] =
     mapAsSerializableJavaMap(rdd.reduceByKeyLocally(func))

   /** Count the number of elements for each key, and return the result to the master as a Map. */
-  def countByKey(): java.util.Map[K, Long] = mapAsSerializableJavaMap(rdd.countByKey())
+  def countByKey(): JMap[K, JLong] =
+    mapAsSerializableJavaMap(rdd.countByKey().mapValues(JLong.valueOf))
--- End diff --

This is what I was referring to in the last comment -- I realized that this is how `RDD.countByValue` is implemented, so I was opting for consistency. I don't feel strongly about it.

For the other naming -- yeah, I don't know what to be consistent with. For the files I'm touching, I'll try to use `java.util.` and `jl.Long` consistently, even if it means updating some other bits of these files, and leave it at that.
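For context on why the second hunk adds `mapValues(JLong.valueOf)`: `rdd.countByKey()` returns a Scala `Map[K, Long]` holding primitive longs, while the Java-facing API should hand back boxed `java.lang.Long` values. A minimal sketch of just that boxing step is below; `toJavaCounts` is a hypothetical standalone helper, not a Spark method, and it deliberately skips the `mapAsSerializableJavaMap` serializable-wrapper step shown in the diff.

```scala
import java.{lang => jl, util => ju}
import scala.collection.JavaConverters._

// Hypothetical helper (not part of Spark): box Scala's primitive Long counts
// into java.lang.Long so Java callers see a java.util.Map[K, java.lang.Long].
// This mirrors the conversion the diff performs with
// rdd.countByKey().mapValues(JLong.valueOf) before wrapping the result.
def toJavaCounts[K](counts: Map[K, Long]): ju.Map[K, jl.Long] =
  counts.map { case (k, v) => k -> jl.Long.valueOf(v) }.asJava
```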