[GitHub] spark pull request #16020: [SPARK-18596][ML] add checking and caching to bis...

2017-02-20 Thread hhbyyh
Github user hhbyyh closed the pull request at:

https://github.com/apache/spark/pull/16020


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #16020: [SPARK-18596][ML] add checking and caching to bis...

2016-12-01 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/16020#discussion_r90599078
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/KMeans.scala ---
@@ -334,10 +334,10 @@ class KMeans @Since("1.5.0") (
 val summary = new KMeansSummary(
   model.transform(dataset), $(predictionCol), $(featuresCol), $(k))
 model.setSummary(Some(summary))
-instr.logSuccess(model)
 if (handlePersistence) {
   instances.unpersist()
 }
+instr.logSuccess(model)
--- End diff --

The `handlePersistence` check in `KMeans` at L309 should also be updated to 
use `dataset.storageLevel`. Since we're touching KMeans here anyway, we may as 
well do it now.





[GitHub] spark pull request #16020: [SPARK-18596][ML] add checking and caching to bis...

2016-11-28 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/16020#discussion_r89780207
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/BisectingKMeans.scala ---
@@ -255,10 +256,19 @@ class BisectingKMeans @Since("2.0.0") (
 
   @Since("2.0.0")
   override def fit(dataset: Dataset[_]): BisectingKMeansModel = {
+val handlePersistence = dataset.rdd.getStorageLevel == StorageLevel.NONE
--- End diff --

Thanks for checking on this. I feel like we should have a unit test for 
this, but probably not here.





[GitHub] spark pull request #16020: [SPARK-18596][ML] add checking and caching to bis...

2016-11-28 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/16020#discussion_r89740159
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/BisectingKMeans.scala ---
@@ -273,6 +283,7 @@ class BisectingKMeans @Since("2.0.0") (
 val summary = new BisectingKMeansSummary(
   model.transform(dataset), $(predictionCol), $(featuresCol), $(k))
 model.setSummary(Some(summary))
+if (handlePersistence) rdd.unpersist()
--- End diff --

Prefer 

```
if (handlePersistence) {
  rdd.unpersist()
}
```





[GitHub] spark pull request #16020: [SPARK-18596][ML] add checking and caching to bis...

2016-11-28 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/16020#discussion_r89740085
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/KMeans.scala ---
@@ -334,10 +334,8 @@ class KMeans @Since("1.5.0") (
 val summary = new KMeansSummary(
   model.transform(dataset), $(predictionCol), $(featuresCol), $(k))
 model.setSummary(Some(summary))
+if (handlePersistence) instances.unpersist()
 instr.logSuccess(model)
-if (handlePersistence) {
--- End diff --

Prefer to keep this form, per the style guide.





[GitHub] spark pull request #16020: [SPARK-18596][ML] add checking and caching to bis...

2016-11-28 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/16020#discussion_r89740051
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/BisectingKMeans.scala ---
@@ -255,10 +256,19 @@ class BisectingKMeans @Since("2.0.0") (
 
   @Since("2.0.0")
   override def fit(dataset: Dataset[_]): BisectingKMeansModel = {
+val handlePersistence = dataset.rdd.getStorageLevel == StorageLevel.NONE
--- End diff --

By the way, I've been meaning to log a ticket for this issue, but have been 
tied up.

This will actually never work. `dataset.rdd` will always have storage level 
`NONE`. To see this:

```
scala> import org.apache.spark.storage.StorageLevel
import org.apache.spark.storage.StorageLevel

scala> val df = spark.range(10).toDF("num")
df: org.apache.spark.sql.DataFrame = [num: bigint]

scala> df.storageLevel == StorageLevel.NONE
res0: Boolean = true

scala> df.persist
res1: df.type = [num: bigint]

scala> df.storageLevel == StorageLevel.MEMORY_AND_DISK
res2: Boolean = true

scala> df.rdd.getStorageLevel == StorageLevel.MEMORY_AND_DISK
res3: Boolean = false

scala> df.rdd.getStorageLevel == StorageLevel.NONE
res4: Boolean = true
```

So all the algorithms that check the storage level via `dataset.rdd` end up 
double-caching the data whenever the input DataFrame is already cached, 
because the converted RDD never appears to be cached.

We should therefore migrate all of these checks to use `dataset.storageLevel`, 
which was added in https://github.com/apache/spark/pull/13780
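The behavior above can be modeled with plain-Scala stand-ins (illustrative classes, not the real Spark API): persisting the Dataset changes its own `storageLevel`, but the RDD produced by conversion is fresh and always reports `NONE`.

```scala
// Stand-in classes (NOT the real Spark API) modeling why the RDD-based
// check never fires: Dataset-to-RDD conversion yields a fresh RDD, so its
// storage level is NONE even when the Dataset itself is persisted.
sealed trait StorageLevel
case object NONE extends StorageLevel
case object MEMORY_AND_DISK extends StorageLevel

final class FakeRDD {
  // A freshly converted RDD has never been persisted.
  def getStorageLevel: StorageLevel = NONE
}

final class FakeDataset {
  private var level: StorageLevel = NONE
  def persist(): this.type = { level = MEMORY_AND_DISK; this }
  def storageLevel: StorageLevel = level
  def rdd: FakeRDD = new FakeRDD // conversion returns a new, uncached RDD
}

val df = new FakeDataset().persist()

// Buggy check: inspects the converted RDD, concludes the data is uncached.
val buggyHandlePersistence = df.rdd.getStorageLevel == NONE // true -> double-caches

// Fixed check: asks the Dataset itself, so already-cached input is detected.
val fixedHandlePersistence = df.storageLevel == NONE // false -> no re-cache
```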





[GitHub] spark pull request #16020: [SPARK-18596][ML] add checking and caching to bis...

2016-11-26 Thread hhbyyh
GitHub user hhbyyh opened a pull request:

https://github.com/apache/spark/pull/16020

[SPARK-18596][ML] add checking and caching to bisecting kmeans

## What changes were proposed in this pull request?
jira: https://issues.apache.org/jira/browse/SPARK-18596
This is a follow-up to https://issues.apache.org/jira/browse/SPARK-18356.

Check whether the DataFrame sent to BisectingKMeans is cached; if not, cache 
the converted RDD to ensure good performance for BisectingKMeans.
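As a rough sketch (illustrative names, not the exact Spark source), the guard described above caches only when the caller hasn't, and unpersists only what it cached:

```scala
// Minimal model of the handlePersistence pattern: a CacheTracker stands in
// for a Dataset/RDD whose cached state we can observe.
final class CacheTracker {
  var cached = false
  def persist(): Unit = cached = true
  def unpersist(): Unit = cached = false
}

// Sketch of fit(): cache the converted RDD only if the input is not
// already cached, and clean up exactly what we cached after training.
def fitSketch(input: CacheTracker): Boolean = {
  val handlePersistence = !input.cached // caller hasn't cached the input
  val rdd = new CacheTracker            // stands in for the converted RDD
  if (handlePersistence) {
    rdd.persist()                       // keep iterative passes in memory
  }
  // ... run the bisecting k-means iterations over rdd ...
  if (handlePersistence) {
    rdd.unpersist()                     // release only what we cached
  }
  handlePersistence
}
```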

## How was this patch tested?

Existing unit tests.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hhbyyh/spark bikcache

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/16020.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16020


commit a4c88ee89f175c0672eeebe004ef3ac87a7a464a
Author: Yuhao Yang 
Date:   2016-11-27T05:34:18Z

add checking and caching to bisecting kmeans



