Github user yanboliang commented on the pull request:
https://github.com/apache/spark/pull/4911#issuecomment-77572142
@mengxr Yes, it makes sense. After looking through the code, I found that we
have two alternatives:
1. Implement a new PythonMLLibAPI method that looks like this:

   def newGeneralizedLinearModel(
       modelClass: String,
       weights: Vector,
       intercept: Double): GeneralizedLinearModel = {
     // ...
   }
In PySpark we could then obtain the corresponding Java model through this API
and call the Java model's save/load, which makes a model saved in Scala
loadable in Python and vice versa.
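The idea behind option 1 can be sketched in plain Python with stand-in classes (these are not the real MLlib or Py4J types; the function name simply mirrors the proposed `newGeneralizedLinearModel` API): the Python side passes a model-class name plus the model's parameters across the bridge, and a single factory rebuilds the right model type.

```python
from collections import namedtuple

# Stand-ins for the JVM-side GeneralizedLinearModel subclasses
# (hypothetical; the real classes live in org.apache.spark.mllib).
LogisticRegressionModel = namedtuple("LogisticRegressionModel",
                                     ["weights", "intercept"])
LinearRegressionModel = namedtuple("LinearRegressionModel",
                                   ["weights", "intercept"])

_MODEL_CLASSES = {
    "LogisticRegressionModel": LogisticRegressionModel,
    "LinearRegressionModel": LinearRegressionModel,
}

def new_generalized_linear_model(model_class, weights, intercept):
    """Dispatch on the model-class name and rebuild the model,
    mirroring the proposed newGeneralizedLinearModel signature."""
    try:
        cls = _MODEL_CLASSES[model_class]
    except KeyError:
        raise ValueError("Unknown model class: %s" % model_class)
    return cls(weights, intercept)
```

With this shape, only one entry point has to be maintained as new GeneralizedLinearModel subclasses gain save/load support.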
2. Implement the save/load operations independently in Python, doing the same
thing as in Scala.
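For contrast, option 2 might look like the following pure-Python sketch (a hypothetical layout, NOT Spark's actual on-disk format): every detail of the layout, including file names, field names, and encoding, would have to be kept byte-compatible with the Scala implementation by hand.

```python
import json
import os
from collections import namedtuple

# Stand-in for a trained pyspark.mllib linear model (hypothetical type).
LinearModel = namedtuple("LinearModel", ["weights", "intercept"])

def save_model(model, path):
    """Write the model's parameters under `path` (assumed layout)."""
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "data.json"), "w") as f:
        json.dump({"weights": list(model.weights),
                   "intercept": model.intercept}, f)

def load_model(path):
    """Read the parameters back and rebuild the model."""
    with open(os.path.join(path, "data.json")) as f:
        d = json.load(f)
    return LinearModel(d["weights"], d["intercept"])
```

Any change to the Scala format would silently break this code until it is updated to match, which is the drift risk discussed below.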
I prefer the first one, because in the second scenario, whenever we update
save/load in Scala we would need to keep the Python save/load functions
behaving identically, which may lead to inconsistencies.
Any comments and suggestions?