Hi Spencer & Andy: Thanks for your thoughtful input! I did at one point look at the optim() function and run debug() on it (I wasn't aware of browser()--that's helpful!). My impression is that optim() simply calls a C function that handles the maximization. So if I want to break out of my likelihood function to restart optim() w/ new values, it seems I'd have to somehow communicate to C that it's time to stop. That may mean rewriting the C, with which I'm not familiar--Java yes, so maybe when I have some real free time....
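Actually, R's condition system may already allow this without touching the C: an error (or custom condition) signalled inside the likelihood function should unwind straight through optim()'s C loop back to a tryCatch() wrapped around the optim() call. A minimal sketch -- negloglik, the restart test, and the halved restart values below are toy stand-ins, not my real model:

negloglik <- function(theta) {
  value <- sum((theta - c(1, -2))^2)   # stand-in for a real negative log-likelihood
  if (value > 1e3) {                   # stand-in for "a latent value wants the other branch"
    stop(structure(
      list(message = "restart requested", call = sys.call(),
           new.start = theta / 2),     # hypothetical rule for picking new start values
      class = c("restartOptim", "error", "condition")))
  }
  value
}

start <- c(50, 50)
repeat {
  fit <- tryCatch(optim(start, negloglik, method = "BFGS"),
                  restartOptim = function(cnd) cnd)  # catch our custom condition
  if (!inherits(fit, "restartOptim")) break          # normal convergence: done
  start <- fit$new.start                             # otherwise restart w/ new values
}
fit$par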
Another possibility might be finding some jury-rigged way to break out of optim(). Maybe if I tell the likelihood function to freeze its returned value at some point, optim() will conclude it's done and stop. Probably inefficient, and I'd still have the problem of deciding when the break point ought to occur. I just wish there were some programmatic way to say "stop this and return control to the higher-level calling function" (the tryCatch() sketch above is one stab at exactly that).

A third possibility is the one Spencer suggests: let the routine pursue multiple branches w/o restarting, hence no restart problem. But with Newtonian-style convergence the latent scale values (which are parameters to be estimated) have current positions and are supposed to move smoothly toward lower values of the objective (the negative log-likelihood). What will happen in branched convergence, however, is that some of the latent values will prove to have better values on the other side of a normal curve from their current position. My guess is that this will cause the objective to make a sudden, discontinuous jump that the derivatives can't predict, which may mean it can't converge properly.

Spencer's MDS alternative is intriguing, and I'll need to think more about it. Maybe I should also consider full-on Bayesian Monte Carlo methods (if I have time), which would explore the whole solution space simultaneously.

Thanks,
Peter

On 11/2/05 9:01 PM, "Spencer Graves" <[EMAIL PROTECTED]> wrote:

> Have you looked at the code for "optim"? If you type "optim" (without
> parentheses), R will print the code. You can copy that into a script
> file and walk through it line by line to figure out what it does. By
> doing this, you should be able to find a place in the iteration where
> you can test both branches of each bifurcation and pick one -- or keep a
> list of however many you want and follow them all more or less
> simultaneously, pruning the ones that seem too implausible. Then you
> can alternate between running a piece of the "optim" code and
> bifurcating and pruning, adjusting each and printing intermediate
> progress reports to help you understand what it's doing and how you
> might want to modify it.
>
> With a bit more effort, you can get the official source code with
> comments. To do that, I think you go to "www.r-project.org" -> CRAN ->
> (select a local mirror) -> "Software: R sources". From there, just
> download "The latest release: R-2.2.0.tar.gz".
>
> For more detailed help, I suggest you try to think of the simplest
> possible toy problem that still contains one of the issues you find
> most difficult. Then send that to this list. If readers can copy a few
> lines of R code from your email into R and try a couple of things in
> less than a minute, I think you might get more useful replies quicker.

On 11/3/05 8:08 AM, "Liaw, Andy" <[EMAIL PROTECTED]> wrote:

> Alternatively, just type debug(optim) before using it, then step through
> it by hitting Enter repeatedly...
>
> When you're done, do undebug(optim).

On 11/3/05 11:06 AM, "Liaw, Andy" <[EMAIL PROTECTED]> wrote:

> Essentially, all debug() does is insert browser() as the first line of
> the function being debug()ed. You can type just about any command at
> the browser> prompt, e.g. for checking data, etc. ?browser has a list
> of the special commands for the browser> prompt.
>
> Andy
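To make Andy's debug()/browser() recipe concrete, a minimal session might look like this; fr is just the toy Rosenbrock objective from the ?optim examples:

fr <- function(x) {                   # Rosenbrock banana function, from ?optim
  100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
}

debug(optim)                          # flag optim for single-stepping
optim(c(-1.2, 1), fr)                 # drops into the Browse> prompt
## at the prompt: Enter or n steps, c continues, Q quits, where prints
## the call stack; any other R expression is simply evaluated
undebug(optim)                        # turn single-stepping back off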

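And a rough sketch of Spencer's bifurcate-and-prune idea that avoids editing the optim() source: run every live candidate for a short burst of iterations, spawn each one's "other branch", and keep only the best few. Here the sign-flip bifurcation and the keep-three pruning rule are arbitrary stand-ins for the real branch geometry, and fr is again the toy objective from ?optim:

fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2

candidates <- list(c(-1.2, 1), c(2, 2))        # initial branch points
for (round in 1:10) {
  # advance each live candidate a few BFGS iterations
  fits <- lapply(candidates, function(p)
    optim(p, fr, method = "BFGS", control = list(maxit = 5)))
  # bifurcate: add each candidate's mirror image as its "other branch"
  branched <- c(lapply(fits, `[[`, "par"),
                lapply(fits, function(f) -f$par))
  # prune: keep the three best by objective value
  ord <- order(sapply(branched, fr))
  candidates <- branched[ord][seq_len(min(3, length(branched)))]
}
candidates[[1]]       # best surviving branch
fr(candidates[[1]])   # its objective value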