GitHub user jkbradley commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16441#discussion_r96708053
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/GBTClassifier.scala ---
    @@ -315,8 +368,9 @@ object GBTClassificationModel extends MLReadable[GBTClassificationModel] {
           implicit val format = DefaultFormats
           val (metadata: Metadata, treesData: Array[(Metadata, Node)], treeWeights: Array[Double]) =
             EnsembleModelReadWrite.loadImpl(path, sparkSession, className, treeClassName)
    -      val numFeatures = (metadata.metadata \ "numFeatures").extract[Int]
    -      val numTrees = (metadata.metadata \ "numTrees").extract[Int]
    +      val numFeatures = (metadata.metadata \ numFeaturesKey).extract[Int]
    +      val numTrees = (metadata.metadata \ numTreesKey).extract[Int]
    +      val numClasses = (metadata.metadata \ numClassesKey).extract[Int]
    --- End diff ---
    
    This will break backwards compatibility for loading models saved in 
previous Spark versions.  We know numClasses = 2, so let's just not save the 
value for now.  If we add multiclass GBTs, then we can add numClasses to 
metadata.
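
    For illustration, here is a minimal standalone sketch of how a load
    path could tolerate metadata that lacks the key: read it with json4s
    extractOpt and fall back to 2 when it is absent. The object name, the
    literal "numClasses" (standing in for numClassesKey), and the sample
    JSON strings below are hypothetical, not part of this patch.

        import org.json4s._
        import org.json4s.jackson.JsonMethods.parse

        object NumClassesCompatSketch {
          implicit val format: Formats = DefaultFormats

          // Metadata written by earlier Spark versions has no "numClasses"
          // entry; extractOpt returns None instead of throwing, so default
          // to 2 (GBT classification is binary-only today).
          def numClassesFrom(metadataJson: String): Int =
            (parse(metadataJson) \ "numClasses").extractOpt[Int].getOrElse(2)

          def main(args: Array[String]): Unit = {
            val oldMetadata = """{"numFeatures": 5, "numTrees": 3}"""
            val newMetadata = """{"numFeatures": 5, "numTrees": 3, "numClasses": 2}"""
            println(numClassesFrom(oldMetadata)) // 2 (fallback for older models)
            println(numClassesFrom(newMetadata)) // 2 (read from the saved metadata)
          }
        }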


