One way of implementing some Bayesian techniques is to add data points
based on prior knowledge. See, e.g., Gelman, Carlin, Stern, and Rubin,
Bayesian Data Analysis (1997), for how a prior on a regression parameter
can be interpreted as an additional data point. (Section 8.9 in my 2000
What is the context? What do the outliers represent? If you
think carefully about the context, you may find the answer.
Hope this helps. Spencer Graves
p.s. I know statisticians who worked for HP before the split and who
still work for either HP or Agilent, I'm not certain which.
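As an illustration of the pseudo-data idea above, here is a minimal sketch
(in Python rather than R, with invented numbers, and assuming the simplest
case of a single-slope, no-intercept model): a Normal(b0, tau^2) prior on the
slope is equivalent to appending one extra observation (sigma/tau, b0*sigma/tau).

```python
# Sketch (not the book's code; numbers invented): the pseudo-data view of a
# Gaussian prior.  Model: y = b*x + noise with noise sd sigma; prior on the
# slope: b ~ Normal(b0, tau^2).  The prior's penalty (b - b0)^2 / tau^2 is
# exactly the squared residual of one extra point (sigma/tau, b0*sigma/tau).

def slope_through_origin(xs, ys):
    """Least-squares slope for a fit forced through the origin."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0]
ys = [1.1, 2.3, 2.9]               # roughly slope 1

sigma = 1.0                        # assumed noise sd
b0, tau = 0.0, 0.5                 # prior mean and sd for the slope

x_star = sigma / tau               # pseudo data point encoding the prior
y_star = b0 * sigma / tau

b_mle = slope_through_origin(xs, ys)                        # data alone
b_map = slope_through_origin(xs + [x_star], ys + [y_star])  # data + prior
# b_map is pulled from b_mle toward the prior mean b0
```

A tighter prior (smaller tau) makes the pseudo-point more influential and
shrinks the fitted slope further toward b0.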
If you know that the line should pass through (0,0), would it make sense to
do a regression without an intercept? You can do that by putting -1 in
the formula, like: lm(y ~ x - 1).
Hope this helps,
Matt
Matthew Wiener
RY84-202
Applied Computer Science Mathematics Dept.
Merck Research Labs
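For concreteness, a no-intercept fit like lm(y ~ x - 1) just minimizes
sum((y - b*x)^2), so the slope is sum(x*y) / sum(x^2). A quick sketch
(Python for illustration, with made-up numbers):

```python
# Illustrative sketch of what a no-intercept least-squares fit computes;
# the data below are invented.

def slope_no_intercept(xs, ys):
    """Slope of the least-squares line forced through (0, 0)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 4.0]
ys = [2.1, 3.9, 8.2]
b = slope_no_intercept(xs, ys)   # close to 2
```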
Not a good idea, unless the regression function is *known* to be linear.
More likely it is only approximately linear over small ranges.
Murray Jorgensen
Wiener, Matthew wrote:
If you know that the line should pass through (0,0), would it make sense to
do a regression without an intercept? You
It is likely that the true relationship is nonlinear; there is no a priori
knowledge that it is linear. In the small range where we do have enough data,
the relationship looks linear. Outside that range, the data are very scarce
and very noisy as well.
This is why adding (0,0) to the
[EMAIL PROTECTED] wrote:
Hi,
My question is the following:
I would like to fit a robust regression line. The data I have are
mostly clustered in a small range, so
the regression line tends to be strongly influenced by outlier points
(with large Cook's distance). From the
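For the robust line asked about above, one simple estimator (besides
M-estimators such as MASS::rlm in R) is the Theil-Sen fit: the median of all
pairwise slopes. A hedged sketch in Python with invented data, just to show
how it resists a high-leverage outlier:

```python
# Illustrative sketch, not the poster's code: Theil-Sen slope estimate,
# a simple robust alternative to ordinary least squares when a few
# high-leverage points dominate the fit.  Data are made up.
from statistics import median

def theil_sen_slope(xs, ys):
    """Median of the slopes over all pairs of points with distinct x."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs))
              for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    return median(slopes)

xs = [1.0, 2.0, 3.0, 4.0, 10.0]        # last point has high leverage
ys = [1.0, 2.1, 2.9, 4.2, 30.0]        # ...and its y is a gross outlier
b_robust = theil_sen_slope(xs, ys)     # stays near the bulk's slope (~1.2)
# An ordinary least-squares slope on the same data is near 3.4,
# dragged far off by the single outlier.
```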