I am testing LogisticRegression performance on synthetically generated
data. The input weights are

   w = [2, 3, 4]

with no intercept and three features. After training on 1000 synthetically
generated data points, with each feature drawn from a random normal
distribution, the Spark LogisticRegression model I obtain has these weights:

 [6.005520656096823,9.35980263762698,12.203400879214152]
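
For reference, the training data is generated roughly along the lines of the
sketch below. This is only an illustration, not the exact script: the column
names, the random seed, and the use of a Spark 2.x SparkSession called spark
are assumptions (on Spark 1.x the Vectors import would come from
org.apache.spark.mllib.linalg and the DataFrame would be built via sqlContext).

import scala.util.Random
import org.apache.spark.ml.linalg.Vectors

// Each of the three features is drawn from a standard normal; the label is
// Bernoulli with probability sigmoid(w . x); no intercept term is added.
val w = Array(2.0, 3.0, 4.0)
val rng = new Random(42)

val trainingData = spark.createDataFrame(
  (1 to 1000).map { _ =>
    val x = Array.fill(3)(rng.nextGaussian())
    val margin = w.zip(x).map { case (wi, xi) => wi * xi }.sum
    val p = 1.0 / (1.0 + math.exp(-margin))
    val label = if (rng.nextDouble() < p) 1.0 else 0.0
    (label, Vectors.dense(x))
  }
).toDF("label", "features")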

Each weight is scaled by a factor close to 3 w.r.t. the original values
(6.01 / 2 ≈ 3.00, 9.36 / 3 ≈ 3.12, 12.20 / 4 ≈ 3.05), and I am unable to
guess the reason behind this. The code is simple enough:


import org.apache.spark.ml.classification.LogisticRegression

/*
 * Logistic Regression model: elastic-net regularized, no intercept
 * (the synthetic data is generated without one).
 */
val lr = new LogisticRegression()
  .setMaxIter(50)
  .setRegParam(0.001)
  .setElasticNetParam(0.95)
  .setFitIntercept(false)

val lrModel = lr.fit(trainingData)

// weights on older Spark releases; coefficients in Spark 2.x and later.
println(s"${lrModel.weights}")



I would greatly appreciate it if someone could shed some light on what's
fishy here.

with kind regards, Nikhil



