Luke Keele wrote:
I have had some weird results using the optim() function.  I wrote a
probit likelihood and wanted to run it with optim() with simulated
data.  I did not include a gradient at first and found that optim()
would not even iterate using BFGS and would only occasionally work
using SANN.  I programmed in the gradient and it iterates fine, but the
estimates it returns are wrong.  The simulated data work fine when I
use them in the glm() function in R.  Even stranger, the function
seems to work fine with real data.  There must be some programming
quirk that I have overlooked.  My code is attached if anyone wants to
take a look.  I'd like to be able to make it work without the gradient
if possible.  Any help would be greatly appreciated as I am completely
stumped at this point.

Luke Keele
Dept of Political Science
UNC-Chapel Hill


When I tried this, I was getting NaN for the likelihood because phi came out to be exactly 0 or 1 in some cases, so log(phi) or log(1 - phi) returned a value optim() doesn't like. To avoid this problem, use pnorm(..., log.p = TRUE) to compute the log of the probability directly (which is what you need anyway) in your likelihood function:



# Define the negative log-likelihood for the probit model
llik.probit <- function(par, X, Y){
  Y <- as.matrix(Y)   # use the arguments passed in, not global y/x
  X <- as.matrix(X)
  # log Phi(x'b) when Y == 1; log Phi(-x'b) = log(1 - Phi(x'b)) when Y == 0
  phi <- pnorm(ifelse(Y == 0, -1, 1) * X %*% par, log.p = TRUE)
  -sum(phi)
}
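
As a quick sanity check, you can compare the optim() estimates against glm() on simulated data. This is a minimal sketch; the data-generating step, the starting values, and method = "BFGS" are my assumptions, not from the original post:

# Minimal sketch -- simulated data, starting values, and BFGS are assumptions
set.seed(1)
n    <- 1000
x    <- cbind(1, rnorm(n), rnorm(n))        # design matrix with intercept
beta <- c(0.5, -1, 2)                       # illustrative "true" coefficients
y    <- rbinom(n, 1, pnorm(x %*% beta))     # simulated probit outcomes

fit <- optim(par = rep(0, ncol(x)), fn = llik.probit,
             X = x, Y = y, method = "BFGS")
fit$par

# glm's probit estimates should be close to fit$par
coef(glm(y ~ x - 1, family = binomial(link = "probit")))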

-sd

