Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4233#discussion_r23887814
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/classification/LogisticRegression.scala ---
    @@ -68,6 +79,65 @@ class LogisticRegressionModel (
           case None => score
         }
       }
    +
    +  override def save(sc: SparkContext, path: String): Unit = {
    +    val sqlContext = new SQLContext(sc)
    +    import sqlContext._
    +
    +    // Create JSON metadata.
    +    val metadata = LogisticRegressionModel.Metadata(
    +      clazz = this.getClass.getName, version = Exportable.latestVersion)
    --- End diff --
    
    Should each model have its own version? For example, we might later add 
some statistics to LRModel and save them during model export. If the version 
is global, we might have trouble loading older models back with newer code.
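
    As a rough sketch of what per-model versioning could look like (the names 
below are hypothetical, not from this PR):

    ~~~
    object LogisticRegressionModel {
      // Hypothetical: version of LogisticRegressionModel's own save format,
      // bumped only when this model's layout changes (e.g. when the extra
      // statistics are added), independent of other models' versions.
      private val thisFormatVersion = "1.0"
    }
    ~~~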
    
    With the new DataFrame API, this could be done by
    
    ~~~
    sc.parallelize(Seq((clazz, version))).toDataFrame("clazz", "version")
    ~~~
    
    and hence we don't need the case class. It also lets us name the column 
"class" instead of "clazz", since `class` is a reserved word in Scala and 
awkward as a case class field name, but fine as a plain column name.
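
    A slightly fuller sketch of what `save` could then look like, still using 
the names from the diff above and assuming the `toDataFrame` / `toJSON` 
methods currently in master (the "metadata" subdirectory is only 
illustrative):

    ~~~
    override def save(sc: SparkContext, path: String): Unit = {
      val sqlContext = new SQLContext(sc)
      import sqlContext._

      // One-row metadata table; no Metadata case class needed, and the
      // column can be called "class" because it is only a column name,
      // not a Scala identifier.
      val metadata = sc.parallelize(
        Seq((this.getClass.getName, Exportable.latestVersion)))
        .toDataFrame("class", "version")
      metadata.toJSON.saveAsTextFile(path + "/metadata")
    }
    ~~~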
    
    For the JSON format, we can use json4s directly and save the metadata as 
an `RDD[String]`. The code will be cleaner and carry fewer dependencies.
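
    For instance, a minimal sketch of the json4s route inside `save` (again, 
the "metadata" subdirectory name is only illustrative):

    ~~~
    import org.json4s.JsonDSL._
    import org.json4s.jackson.JsonMethods._

    // Build the metadata JSON directly; no SQLContext or case class needed.
    val metadataJson = compact(render(
      ("class" -> this.getClass.getName) ~
      ("version" -> Exportable.latestVersion)))
    // Save it as a single-partition text file under the model path.
    sc.parallelize(Seq(metadataJson), 1).saveAsTextFile(path + "/metadata")
    ~~~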

