[R] Covariance of beta and lambda in geoR likfit

2007-08-20 Thread Rubén Roa-Ureta
Greetings comRades:
Has anyone worked out a procedure to calculate the estimated covariance 
between the beta and lambda estimates in the likelihood-based 
geostatistical model implemented in package geoR? If so, please help me 
out by letting me know.
Do I have reasons to expect that this covariance term would be big or 
perhaps more happily, that it could be discarded without much effect?
Thanks.
Rubén

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Time when object was created

2007-07-25 Thread Rubén Roa-Ureta
Dear ComRades,
Last night I left an Intel Core Duo Windows Vista system running an 
extensive mixed glm with lmer. This morning I found that R-lmer had 
finished the job successfully. I would like to know at what time the 
object containing the results (call it lme_1) was finished. I guess 
there is some simple function that when applied to the object would give 
me the time stamp?
Thanks
Rubén
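As far as I know, an R object does not carry a creation time stamp of its own, so the finishing time has to be captured explicitly. A minimal sketch (the `sqrt()` call is a stand-in for the long `lmer()` fit, and `lme_1_time` is a made-up name):

```r
t0 <- Sys.time()               # clock reading before the long fit
lme_1 <- sqrt(1:10)            # stand-in for the extensive lmer() call
lme_1_time <- Sys.time()       # store the finishing time alongside the object
lme_1_time
elapsed <- difftime(lme_1_time, t0, units = "secs")
# If the workspace was saved when the job ended, the file modification
# time is another clue: file.mtime(".RData")
```

After the fact, when no time was recorded, the modification time of a saved workspace or output file is usually the only recoverable trace.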



[R] How to make a table of a desired dimension

2007-06-08 Thread Rubén Roa-Ureta
Hi ComRades,

I want to make a matrix of frequencies from vectors of a continuous 
variable spanning different values. For example this code
x <- c(runif(100,10,40), runif(100,43,55))
y <- c(runif(100,7,35), runif(100,37,50))
z <- c(runif(100,10,42), runif(100,45,52))
a <- table(ceiling(x))
b <- table(ceiling(y))
c <- table(ceiling(z))
a
b
c

will give me three tables that start and end at different integer 
values, and besides, they have 'holes' in between different integer 
values. Is it possible to use 'table' to make these three tables have 
the same dimensions, filling in the absent labels with zeroes? In the 
example above, the desired tables should all start at 8 and tables 'a' 
and 'c' should put a zero at labels '8' to '10', should all put zeros in 
the frequencies of the labels corresponding to the holes, and should all 
end at label '55'. The final purpose is to make a matrix and use 
'matplot' to plot all the frequencies in one plot, such as

#code valid only when 'a', 'b', and 'c' have the proper dimension
p <- mat.or.vec(48, 4)
p[,1] <- 8:55
p[,2] <- c(matrix(a)[1:48])
p[,3] <- c(matrix(b)[1:48])
p[,4] <- c(matrix(c)[1:48])
matplot(p)

I read the help about 'table' but I couldn't figure out if dnn, 
deparse.level, or the other arguments could serve my purpose. Thanks for 
your help
Rubén
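One approach that I believe answers this, sketched here: coerce each vector to a factor with a common, fixed set of levels before calling 'table', so absent labels show up with frequency zero and all three tables share the same dimension:

```r
# Same simulated data as in the question
x <- c(runif(100,10,40), runif(100,43,55))
y <- c(runif(100,7,35),  runif(100,37,50))
z <- c(runif(100,10,42), runif(100,45,52))
lev <- 8:55  # common label range covering all three vectors
# Fixing levels= forces every table to report all 48 labels, zeros included
a <- table(factor(ceiling(x), levels = lev))
b <- table(factor(ceiling(y), levels = lev))
c <- table(factor(ceiling(z), levels = lev))
p <- cbind(lev, a, b, c)  # 48 x 4 matrix of labels and frequencies
matplot(p[, 1], p[, -1], type = "l",
        xlab = "Label", ylab = "Frequency")
```

The `levels=` argument is what fills the holes; no post-hoc padding of the tables is needed.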



Re: [R] ED50 from logistic model with interactions

2007-05-02 Thread Rubén Roa
Kate Stark wrote:
 Hi,

 I was wondering if someone could please help me. I am doing a logistic
 regression to compare size at maturity between 3 seasons. My model is:

 fit <- glm(Mature ~ Season * Size - 1, family = binomial, data = dat)

 where Mature is a binary response, 0 for immature, 1 for mature. There
 are 3 Seasons.

 The Season * Size interaction is significant. I would like to compare the 
 size at 50% maturity between Seasons, which I have calculated as:

 Mat50_S1 <- -fit$coef[1]/fit$coef[4]
 Mat50_S2 <- -fit$coef[2]/(fit$coef[4] + fit$coef[5])
 Mat50_S3 <- -fit$coef[3]/(fit$coef[4] + fit$coef[6])

 But I am not sure how to calculate the standard error around each of
 these estimates. The dose.p function from the MASS package does this
 automatically, but it doesn't seem to allow interaction terms.

 In Faraway(2006) he has an example using the delta method to calculate
 the StdErr, but again without any interactions. I can apply this for the
 first Season, as there is just one intercept and one slope coefficient,
 but for the other 2 Seasons, the slope is a combination of the Size
 coefficient and the Size*Season coefficient, and I am not sure how to use 
 the covariance matrix in the delta calculation.

 I could divide the data and do 3 different logistic regressions, one for
 each season, but while the Mat50 (i.e. mean Size at 50% maturity) is the
 same as that calculated by the separate-lines regression, I'm not sure how 
 this may change the StdErr?

 Regards,

 Kate
   
Hi,
Maybe you can re-parameterize the logistic model so that the size at 50% 
maturity for each season is an explicit parameter of the model; the 
support function is then maximized over those parameters and you get 
their standard errors directly. For example, the following code 
estimates the size at 50% maturity when the only predictor variable is 
size. I guess you can generalize it to the case of an additional factor 
such as season. I wrote this code as a translation from ADMB to R, so it 
is rather detailed (it uses nlm for maximization). count is real data 
from a squid population and size (l) is in cm. Note in prop.ini, 
prop.est, and prop.fit the reparameterization that introduces the size 
at 50% maturity directly (along with the size at 95% maturity).
Rubén Roa-Ureta

l <- sort(c(8:33, 8:33))
mat <- rep.int(x = c(0,1), times = 26)
imat <- rep.int(x = c(1,0), times = 26)
count <- c(2,0,3,0,3,0,6,0,7,0,9,0,6,0,3,0,6,0,2,0,4,0,1,0,4,1,2,1,2,0,1,2,1,1,2,0,1,2,2,2,0,0,1,1,0,0,0,0,0,0,0,1)
l.plot <- 8:33
countot <- vector(mode = "numeric", length = 26)
prop.obs <- vector(mode = "numeric", length = 26)
for (i in 1:26) {
  countot[i] <- count[i*2 - 1] + count[i*2]
  prop.obs[i] <- if (countot[i] > 0) count[i*2]/countot[i] else NA
}
l_50 <- 25
l_95 <- 35
prop.ini <- 1/(1 + exp((log(1/19)) * ((l.plot - l_50)/(l_95 - l_50))))
plot(l.plot, prop.obs)
lines(l.plot, prop.ini)
fn <- function(p){
  prop.est <- 1/(1 + exp(log(1/19) * (l - p[1])/(p[2] - p[1])))
  iprop.est <- 1 - prop.est
  negloglik <- -sum(count * (mat*log(prop.est) + imat*log(iprop.est)))
  negloglik
}
prop.lik <- nlm(fn, p = c(25.8, 33.9), hessian = TRUE)
prop.lik
L_50 <- prop.lik$estimate[1]
L_95 <- prop.lik$estimate[2]
prop.covmat <- solve(prop.lik$hessian)
seL_50 <- sqrt(prop.covmat[1,1])
seL_95 <- sqrt(prop.covmat[2,2])
covL_50_95 <- prop.covmat[1,2]
prop.fit <- 1/(1 + exp((log(1/19)) * ((l.plot - L_50)/(L_95 - L_50))))
plot(l.plot, prop.obs, pch = 19, xlab = "Length (cm)", ylab = "Proportion Mature")
lines(l.plot, prop.fit)
text(x = 12.5, y = 0.9, expression(paste("p(l) = ", frac(1, 1 + e^frac(ln(1/19)*(l - l[50]), l[95] - l[50])))))
text(x = 12.5, y = 0.7, expression(paste(hat(l)[50], " = 25.8")))
text(x = 12.5, y = 0.55, expression(paste(hat(l)[95], " = 34.0")))
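For completeness, a sketch of the delta-method calculation Kate asked about, using the full covariance matrix from vcov(fit). The data below are simulated only to make the example self-contained (they are not from the original question); the names dat, Mature, Season, and Size mimic her model:

```r
# Simulate a 3-season maturity data set with a common Size slope
set.seed(1)
dat <- data.frame(Season = factor(rep(1:3, each = 200)),
                  Size   = runif(600, 5, 50))
eta <- -6 + as.numeric(dat$Season) + 0.25 * dat$Size
dat$Mature <- rbinom(600, 1, plogis(eta))
fit <- glm(Mature ~ Season * Size - 1, family = binomial, data = dat)
b <- coef(fit)   # (Season1, Season2, Season3, Size, Season2:Size, Season3:Size)
V <- vcov(fit)   # full 6 x 6 covariance matrix of the coefficients
# Size at 50% maturity in Season 2: m = -b2/(b4 + b5)
m <- -b[2] / (b[4] + b[5])
# Gradient of m with respect to (b1,...,b6) for the delta method
g <- c(0, -1/(b[4] + b[5]), 0,
       b[2]/(b[4] + b[5])^2, b[2]/(b[4] + b[5])^2, 0)
se_m <- sqrt(t(g) %*% V %*% g)  # delta-method standard error of Mat50_S2
```

The off-diagonal elements of V are what the single-season delta-method example in Faraway lacks; the quadratic form t(g) %*% V %*% g picks them up automatically.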



[R] How to solve for parameter in likelihood?

2006-10-16 Thread Rubén Roa
Hi,
I want to calculate, for any likelihood function, the value(s) of the
parameter of interest that makes the likelihood equal to 1/8-th of the
value of the likelihood at the maximum. Say, with the following toy
example with a binomial distribution and with interest in p:

p <- seq(0.01, 0.99, by = 0.001)
n <- 113
y <- 28
llbin <- y*log(p) + (n - y)*log(1 - p) #log-likelihood
lbin <- exp(llbin) #likelihood
rlbin <- mat.or.vec(length(p), nc = 1) #relative likelihood
for (i in 1:length(p)) {
  rlbin[i] <- lbin[i]/max(lbin)
  }

How do I obtain the value of p such that rlbin=0.125?
It doesn't seem to me that I can do this using solve.
I can do this

rlbinp <- cbind(rlbin, p)
rlbinp

and actually look at the two values of p that make the rlbin go down to
0.125*max(rlbin), but there must be another way.
The corresponding function in Octave (and Matlab) is fzero.
Thanks in advance,
Rubén
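R's counterpart of Octave/Matlab fzero is uniroot(). A sketch for this binomial example: since the log-likelihood is maximized at the closed-form MLE y/n, the relative log-likelihood can be written directly, and one root is bracketed on each side of the maximum:

```r
n <- 113
y <- 28
phat <- y / n  # closed-form maximum-likelihood estimate of p
# Log relative likelihood minus log(1/8); zero where rlbin = 0.125
rel.loglik <- function(p) {
  (y * log(p) + (n - y) * log(1 - p)) -
  (y * log(phat) + (n - y) * log(1 - phat)) - log(0.125)
}
# The function is negative far from phat and positive at phat,
# so each interval brackets exactly one root
lo <- uniroot(rel.loglik, c(0.01, phat), tol = 1e-8)$root
hi <- uniroot(rel.loglik, c(phat, 0.99), tol = 1e-8)$root
c(lo, hi)  # the two values of p where the relative likelihood is 1/8
```

This avoids the grid search entirely; the grid answer from rlbinp should agree with these roots to within the grid spacing of 0.001.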
