In addition to Dimitris's approach, the following is probably more
straightforward (the idea is the same, but the implementation is simpler;
you do not need starting values, for instance).
Given the linear predictor lp:
b0 + b1*X1 + b2*X2
since b2 = 1 - b1, the lp becomes:
b0 + b1*X1 + (1 - b1)*X2 = b0 + b1*(X1 - X2) + X2
so you can fit the constrained model directly with lm(), regressing on
(X1 - X2) and supplying X2 as an offset.
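A minimal sketch of this reparameterization (the data are simulated; the true coefficients 5, -3, 4 satisfy b1 + b2 = 1 and match the simulation used further down in the thread):

```r
set.seed(42)  # hypothetical seed, just for reproducibility
x1 <- runif(100, -4, 4)
x2 <- runif(100, -4, 4)
y  <- rnorm(100, 5 - 3 * x1 + 4 * x2, 2)  # true betas 5, -3, 4 (b1 + b2 = 1)

# constrained fit: y = b0 + b1*(x1 - x2) + x2 + error
fit <- lm(y ~ I(x1 - x2), offset = x2)

b0 <- coef(fit)[1]
b1 <- coef(fit)[2]
b2 <- 1 - b1  # recover b2 from the constraint
```

No starting values are needed, and the usual lm() machinery (standard errors for b0 and b1, diagnostics) comes for free.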
Have a look at the linear.hypothesis function in the car package. For example:
mod.duncan <- lm(prestige ~ income + education, data = Duncan)
linear.hypothesis(mod.duncan, "income + education = 1")
Linear hypothesis test
Hypothesis:
income + education = 1
Model 1: prestige ~ income + education
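In recent versions of the car package the function has been renamed linearHypothesis(); the hypothesis is again passed as a character string. A minimal sketch (assumes car and its Duncan data set are installed):

```r
library(car)  # provides linearHypothesis() and the Duncan data

mod.duncan <- lm(prestige ~ income + education, data = Duncan)

# F-test of the restriction b_income + b_education = 1
linearHypothesis(mod.duncan, "income + education = 1")
```

Note this tests the restriction rather than imposing it; to fit under the constraint, use the reparameterization above.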
You could reparameterize, e.g.,
x1 <- runif(100, -4, 4)
x2 <- runif(100, -4, 4)
X <- cbind(1, x1, x2)
y <- rnorm(100, as.vector(X %*% c(5, -3, 4)), 2)
##
fn <- function(betas){
    betas <- c(betas, 1 - betas[2])    # impose b2 = 1 - b1
    crossprod(y - X %*% betas)[1, ]    # residual sum of squares
}
opt <- optim(c(5, -3), fn,
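The optim() call above is cut off in the archive; a self-contained sketch of the same idea (the choice of method = "BFGS" is an assumption, any gradient-free or quasi-Newton method would do):

```r
set.seed(42)  # hypothetical seed, for reproducibility
x1 <- runif(100, -4, 4)
x2 <- runif(100, -4, 4)
X  <- cbind(1, x1, x2)
y  <- rnorm(100, as.vector(X %*% c(5, -3, 4)), 2)

# residual sum of squares as a function of (b0, b1); b2 = 1 - b1
fn <- function(betas) {
    betas <- c(betas, 1 - betas[2])
    crossprod(y - X %*% betas)[1, ]
}

opt <- optim(c(5, -3), fn, method = "BFGS")  # starting values: true (b0, b1)
betas.hat <- c(opt$par, 1 - opt$par[2])      # append b2 = 1 - b1
```

This minimizes the same least-squares criterion as the offset trick, so the two approaches should give essentially identical coefficient estimates.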