I did not know that step size was an issue here. I had thought that convergence problems in this case were due to poor finite-difference approximations of the gradient. My guess was that near the optimum, numerical errors come to dominate the gradient calculation, causing convergence to fail.
I've found that the problem of confusing gradient-based optimizers can sometimes be fixed by augmenting the original system of ODEs with what I believe engineers call the sensitivity equations. If your original equation is dg/dt = f(t, g, b), where b is a parameter to be estimated, differentiate both sides with respect to b (be sure to remember the chain rule here, since f depends on b both directly and through g) to get an ODE for the sensitivity s = dg/db:

  ds/dt = (df/dg) s + df/db

Integrate this alongside the original equation and, under some regularity assumptions that justify swapping the order of differentiation, you get dg/db, which can be used to give nls a gradient to work with. A rough sketch of the idea is below.

R. Woodrow Setzer, Jr.              Phone: (919) 541-0128
Experimental Toxicology Division    Fax:   (919) 541-4284
Pharmacokinetics Branch
NHEERL B143-05; US EPA; RTP, NC 27711
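As a concrete sketch of what I mean (the exponential decay model, the parameter values, and the use of the deSolve package are just my illustration, not anything special about the method):

library(deSolve)

## Original equation: dg/dt = -b * g, with g(0) = g0 known.
## Sensitivity equation, by differentiating the rhs w.r.t. b:
##   ds/dt = d(-b*g)/db = -g - b*s   (chain rule: b enters directly
##                                    and through g), with s(0) = 0.
augmented <- function(t, y, parms) {
  b <- parms["b"]
  dg <- -b * y[1]
  ds <- -y[1] - b * y[2]
  list(c(dg, ds))
}

g0    <- 10                      # known initial condition
times <- seq(0, 5, by = 0.5)     # observation times

## Model function: returns predicted g with the sensitivity dg/db
## attached as the "gradient" attribute, which nls() will use in
## place of its own finite-difference derivatives.
model <- function(b) {
  out  <- ode(y = c(g = g0, s = 0), times = times,
              func = augmented, parms = c(b = b))
  pred <- out[, "g"]
  attr(pred, "gradient") <- cbind(b = out[, "s"])
  pred
}

## Simulated data (true b = 0.7) and the fit
set.seed(1)
yobs <- g0 * exp(-0.7 * times) + rnorm(length(times), sd = 0.1)
fit  <- nls(yobs ~ model(b), start = list(b = 0.3))
summary(fit)

The payoff is that the gradient is integrated to the same tolerance as the states themselves, rather than being contaminated by the differencing errors mentioned above.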