Github user smurching commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19186#discussion_r138136774
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala ---
    @@ -483,24 +488,17 @@ class LogisticRegression @Since("1.2.0") (
         this
       }
     
    -  override protected[spark] def train(dataset: Dataset[_]): LogisticRegressionModel = {
    -    val handlePersistence = dataset.rdd.getStorageLevel == StorageLevel.NONE
    -    train(dataset, handlePersistence)
    -  }
    -
    -  protected[spark] def train(
    -      dataset: Dataset[_],
    -      handlePersistence: Boolean): LogisticRegressionModel = {
    +  protected[spark] def train(dataset: Dataset[_]): LogisticRegressionModel = {
         val w = if (!isDefined(weightCol) || $(weightCol).isEmpty) lit(1.0) else col($(weightCol))
         val instances: RDD[Instance] =
           dataset.select(col($(labelCol)), w, col($(featuresCol))).rdd.map {
             case Row(label: Double, weight: Double, features: Vector) =>
               Instance(label, weight, features)
           }
     
    -    if (handlePersistence) instances.persist(StorageLevel.MEMORY_AND_DISK)
    +    if ($(handlePersistence)) instances.persist(StorageLevel.MEMORY_AND_DISK)
    --- End diff ---
    
    If `$(handlePersistence)` is `true`, we should still check that `dataset` is uncached (i.e. that `dataset.storageLevel == StorageLevel.NONE`) before caching `instances`; otherwise we'll run into the issues described in [SPARK-21799](https://issues.apache.org/jira/browse/SPARK-21799).
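    
    For concreteness, a rough sketch of the guard I have in mind (untested; it assumes `handlePersistence` stays a `Param[Boolean]` on the estimator, and that the matching `unpersist()` at the end of `train` is skipped under the same condition):
    
    ```scala
    // Sketch only: persist `instances` when the caller asked for it AND the
    // input dataset is not already cached, so we don't keep two copies of the
    // data in memory (see SPARK-21799).
    val persistInstances =
      $(handlePersistence) && dataset.storageLevel == StorageLevel.NONE
    if (persistInstances) instances.persist(StorageLevel.MEMORY_AND_DISK)
    
    // ... fit the model on `instances` ...
    
    if (persistInstances) instances.unpersist()
    ```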

