Hi everyone,
this is probably just a question of syntax that is not clear to me,
so I wanted to clear something up:
On the http://scikit-learn.org/dev/modules/linear_model.html#linear-model page,
the quantity that is minimized for the linear model is the residual
sum of squares:
min_w ||X w - y||_2^2
If y is a 1-D vector (the number of responses is 1), this works fine:
import numpy as np
from sklearn import linear_model

X = np.linspace(1, 400, 400).reshape(20, 20)
w = np.linspace(1, 20, 20)  # 1-D, so w.T is a no-op here
y = np.dot(X, w.T)
lr = linear_model.LinearRegression(fit_intercept=False)
w2 = lr.fit(X, y).coef_
RSS = np.sum((np.dot(X, w2) - y) ** 2)  # ~ 0
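For reference, a quick shape check (assuming the X, y, w2 from the
snippet above):

# In the single-response case coef_ is a plain 1-D array of shape
# (n_features,), so there is no orientation to worry about.
print(y.shape)   # (20,)
print(w2.shape)  # (20,)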
On the flip side, if y is a matrix (more than one response):
X = np.linspace(1, 400, 400).reshape(20, 20)
w = np.linspace(1, 400, 400).reshape(20, 20)
y = np.dot(X, w.T)
lr = linear_model.LinearRegression(fit_intercept=False)
w2 = lr.fit(X, y).coef_
RSS = np.sum((np.dot(X, w2) - y) ** 2)    # ~ 102812157419999.23
RSS = np.sum((np.dot(X, w2.T) - y) ** 2)  # ~ 0
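In case it helps, here is a minimal shape check (again assuming the
X, y, w2 from the snippet just above); it is consistent with coef_
being stored as (n_targets, n_features) for a 2-D y, which would be
why the transpose is needed:

# With a 2-D y, coef_ appears to hold one row of coefficients per
# target, i.e. shape (n_targets, n_features) rather than
# (n_features, n_targets).
print(y.shape)   # (20, 20): 20 samples, 20 targets
print(w2.shape)  # (20, 20), oriented as (n_targets, n_features)
print(np.allclose(np.dot(X, w2.T), y))  # True, matching RSS ~ 0 above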
Is there any reason why, in the case of a matrix y, the coefficients
returned by the linear regression seem to be the transpose of the
actual coefficient matrix? I am sure this is just a matter of
syntax/convention, but I would like to understand the logic.
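For comparison, here is a minimal sketch against np.linalg.lstsq
(assuming the same X, y, w2 as above), which returns its solution in
the opposite (n_features, n_targets) orientation:

# lstsq returns coefficients of shape (n_features, n_targets), so the
# predictions need no transpose here.
w_np, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(w_np.shape)                       # (20, 20)
print(np.allclose(np.dot(X, w_np), y))  # True
print(np.allclose(w_np, w2.T))          # expected True: same least-squares fit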
Federico