Jean-Paul Kibambe Lubamba <jean-paul.kiba...@uclouvain.be> wrote:
>
>I have two questions:
>
>I am computing a linear regression model with the intercept fixed at 0.
>
>I would like the sum of my predicted values to be equal to a constant,
>and then analyze whether my coefficients are significantly different
>with and without this constraint.
>
>Does anyone know how I can constrain my model in such a way?
>
>Here is the code:
>
>data <- read.table("input.txt", header = TRUE, dec = ".", sep = "\t")
>fit <- lm(pop ~ ag + sav + mf - 1, data = data)  # no-intercept model
>pred <- predict(fit)
>sum(pred)
>
>So I want to constrain sum(pred) to be equal to C, with C = sum(pop).
>
>
>
>My second question is: is it possible to impose the same constraint, but with
>C as a vector of values?
>
>Let's say I have 5 observations in 'data', with 'pop' as the first
>column. I want to compute the same model as above with a vector of
>constraints. In that case, C = (x1, x2, x3), with:
>
>x1 <- sum(data[c(2, 4), 1])
>x2 <- sum(data[c(1, 3), 1])
>x3 <- data[5, 1]
>
>
>Thanks in advance -- Any help is welcome!
>

This sounds very odd.  OLS linear regression *does* constrain the predicted values 
- the fitting criterion is to minimize the sum of squared errors.  There are variations 
of regression that use different criteria, like sums of absolute errors and 
so on .... but why constrain the sum of predicted values to a constant?  Note also 
that when the model includes an intercept, the residuals of an OLS fit sum to zero, 
so sum(fitted) equals sum(observed) automatically; it is the -1 in your formula 
that removes this guarantee.
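
So if you keep the intercept, the constraint in your first question holds by 
construction.  If you must drop the intercept, note that sum(pred) = C is a single 
linear equality constraint on the coefficients, since sum(X %*% b) equals 
colSums(X) %*% b, and equality-constrained least squares has a closed-form 
(Lagrange-multiplier) solution.  Here is a minimal base-R sketch; the data frame 
is made up to stand in for your input.txt, with column names assumed from your 
formula:

# Made-up stand-in for input.txt (column names assumed from the formula)
set.seed(1)
data <- data.frame(pop = rnorm(5, 100, 10),
                   ag  = runif(5), sav = runif(5), mf = runif(5))

# With an intercept, OLS residuals sum to zero, so the sums already match:
fit1 <- lm(pop ~ ag + sav + mf, data = data)
all.equal(sum(fitted(fit1)), sum(data$pop))    # TRUE

# No-intercept model with the explicit constraint sum(pred) == C
X <- as.matrix(data[, c("ag", "sav", "mf")])
y <- data$pop
C <- sum(y)

b_ols <- solve(crossprod(X), crossprod(X, y))  # unconstrained OLS coefficients
A     <- matrix(colSums(X), nrow = 1)          # sum(X %*% b) == A %*% b
XtXi  <- solve(crossprod(X))

# Lagrange-multiplier solution of least squares subject to A %*% b == C
b_con <- b_ols + XtXi %*% t(A) %*% solve(A %*% XtXi %*% t(A), C - A %*% b_ols)
sum(X %*% b_con)                               # equals C up to rounding

If the limSolve package is available, limSolve::lsei(A = X, B = y, E = A, F = C) 
should give the same coefficients (in its $X component).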

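The vector-constraint version in your second question is the same computation with 
one row per constraint: row i of the constraint matrix holds the column sums of X 
over the i-th group of observations, and the right-hand side stacks the x_i.  
Continuing the sketch above, with the index sets from your post:

# One constraint per group: predictions over rows c(2,4) must sum to x1,
# over rows c(1,3) to x2, and the prediction for row 5 must equal x3
groups <- list(c(2, 4), c(1, 3), 5)
Amat <- t(sapply(groups, function(i) colSums(X[i, , drop = FALSE])))
Cvec <- sapply(groups, function(i) sum(data[i, 1]))

b_vec <- b_ols + XtXi %*% t(Amat) %*%
  solve(Amat %*% XtXi %*% t(Amat), Cvec - Amat %*% b_ols)
Amat %*% b_vec                                 # reproduces Cvec

Note that with three coefficients and three constraints the solution is fully 
determined by the constraints alone, so there is nothing left for least squares 
to fit - which is another reason to think hard about whether you really want 
these constraints.
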
Peter

Peter L. Flom, PhD
Statistical Consultant
www DOT peterflomconsulting DOT com
