Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/3307#discussion_r20525326
--- Diff: python/pyspark/mllib/classification.py ---
@@ -111,6 +111,53 @@ def train(rdd, i):
         return _regression_train_wrapper(train, LogisticRegressionModel,
                                           data, initialWeights)
+class LogisticRegressionWithLBFGS(object):
+
+    @classmethod
+    def train(cls, data, iterations=100, initialWeights=None, regParam=0.01, regType="l2",
+              intercept=False, corrections=10, tolerance=1e-4):
+        """
+        Train a logistic regression model on the given data.
+
+        :param data: The training data, an RDD of LabeledPoint.
+        :param iterations: The number of iterations (default: 100).
+        :param initialWeights: The initial weights (default: None).
+        :param regParam: The regularizer parameter (default: 0.01).
+        :param regType: The type of regularizer used for training
+                        our model.
+
+                        :Allowed values:
+                           - "l1" for using L1 regularization
+                           - "l2" for using L2 regularization
+                           - None for no regularization
+
+                           (default: "l2")
+
+        :param intercept: Boolean parameter which indicates the use
+                          or not of the augmented representation for
+                          training data (i.e. whether bias features
+                          are activated or not).
+        :param corrections: The number of corrections used in the LBFGS update (default: 10).
--- End diff ---
This doc line is too wide for Python; please wrap it to fit within the line-length limit.
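
For readers following along, below is a minimal usage sketch of the API this diff adds. It is illustrative only and not part of the patch: the SparkContext setup, the app name, and the toy dataset are assumptions, and it presumes the PySpark MLlib API shown above (LabeledPoint, LogisticRegressionWithLBFGS.train).

    # Illustrative sketch only (not part of the patch): assumes a local Spark
    # installation and the train() signature shown in the diff above.
    from pyspark import SparkContext
    from pyspark.mllib.classification import LogisticRegressionWithLBFGS
    from pyspark.mllib.regression import LabeledPoint

    sc = SparkContext(appName="LBFGSExample")  # hypothetical app name

    # Toy, linearly separable binary data: the label follows the first feature.
    points = sc.parallelize([
        LabeledPoint(0.0, [0.0, 1.0]),
        LabeledPoint(0.0, [0.5, 3.0]),
        LabeledPoint(1.0, [1.0, 0.0]),
        LabeledPoint(1.0, [1.5, 1.0]),
    ])

    # Train with L2 regularization; corrections bounds the L-BFGS update
    # history and tolerance is the convergence threshold.
    model = LogisticRegressionWithLBFGS.train(points, iterations=100, regParam=0.01,
                                              regType="l2", intercept=False,
                                              corrections=10, tolerance=1e-4)

    print(model.predict([1.0, 0.0]))  # expected to print 1 for this toy data
    sc.stop()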