I think I understand this but I wanted to check: performing a 
regression by minimizing the sum of squared errors produces a curve 
that goes through the mean of the dependent variable at each value of 
the independent variable (i.e. if my regression produces y=f(x) then 
f(x)=E[Y|X=x]).  Performing a regression by minimizing the sum of 
absolute errors produces a curve that goes through the median of the 
dependent variable at each value of the independent variable 
(f(x)=median[Y|X=x]).
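
As a concrete check of what I mean, here is a small numerical sketch 
(assuming Python with NumPy).  It fits a single constant c rather 
than a full curve, since the intercept-only case is the simplest 
place to see the mean/median property, and it uses a skewed 
exponential sample as a hypothetical example where the mean and 
median differ:

import numpy as np

# Hypothetical skewed sample: the exponential's mean (1.0) and median
# (ln 2, about 0.693) differ, so the two loss functions should visibly
# disagree about the best-fitting constant.
rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=10_000)

# Grid of candidate constants c for the best single-number fit y ~ c.
c = np.linspace(0.0, 3.0, 301)
sse = ((y[:, None] - c) ** 2).sum(axis=0)  # sum of squared errors
sae = np.abs(y[:, None] - c).sum(axis=0)   # sum of absolute errors

# The SSE minimizer should land on the sample mean, the SAE minimizer
# on the sample median (up to the 0.01 grid resolution).
print("argmin SSE:", c[sse.argmin()], " sample mean:  ", y.mean())
print("argmin SAE:", c[sae.argmin()], " sample median:", np.median(y))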

If this is correct, then an absolute error regression and a squared 
error regression should produce exactly the same results if the 
errors are normally distributed (as the mean and the median are then 
the same), given a suitably large sample.  However, while both the 
absolute error and squared error regressions produce an unbiased 
estimate of the mean (assuming normally distributed errors), the 
squared error approach produces a more efficient estimate (in other 
words, both will converge to the mean as the sample size is 
increased, but the squared error estimate will converge more 
quickly).
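
If that is right, then for normal errors the sample median should 
still converge to the mean, just with larger variance; the classical 
result is that its asymptotic variance is pi/2 (about 1.57) times 
that of the sample mean.  Here is a quick Monte Carlo sketch of that 
gap, again assuming Python with NumPy (the sample size and repetition 
count are arbitrary choices):

import numpy as np

# Draw many normal samples and compare the spread of the sample mean
# (the squared error estimate) with the spread of the sample median
# (the absolute error estimate).
rng = np.random.default_rng(1)
n, reps = 100, 20_000
samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))

means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print("variance of sample mean:  ", means.var())   # roughly 1/n
print("variance of sample median:", medians.var()) # roughly (pi/2)/n
print("ratio (median/mean):", medians.var() / means.var())  # ~ 1.57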

Do I have any of this right?

Thanks,
Oliver

PS: I already posted this to alt.sci.math.statistics.prediction but 
then noticed that these groups are much more active.  Please forgive 
the cross-posting.