Hello Sachin, could you share the motivation behind this? The iterateWithTermination function provides us with a means of checking for convergence during iterations, and the convergence check depends highly on the algorithm being implemented: it could be the relative change in error, it could depend on the (error + weights) state history, or on the relative or absolute change in the model, and so on.
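[To make the point concrete, here is a minimal Scala sketch of two such criteria; the names errorConverged and historyConverged are hypothetical and exist nowhere in Flink ML. Note that the two checks require entirely different inputs, which is what makes a single generic signature hard to pin down.]

object ConvergenceExamples {

  // Relative change in training error: only the scalar error before and
  // after the iteration is needed.
  def errorConverged(errBefore: Double, errAfter: Double, tol: Double): Boolean =
    math.abs(errBefore - errAfter) <= tol * math.max(math.abs(errBefore), 1e-12)

  // History-based criterion: needs the full error trajectory, e.g. stop
  // once the error has plateaued over the last k iterations.
  def historyConverged(errors: Seq[Double], k: Int, tol: Double): Boolean =
    errors.length >= k && {
      val recent = errors.takeRight(k)
      recent.max - recent.min <= tol
    }
}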
Could you provide an example where having this function makes development easier? My concern is that this is a hard problem to generalize properly, given the dependence on the specific algorithm, model, and data.

Regards,
Theodore

On Wed, Jul 1, 2015 at 9:23 PM, Sachin Goel <sachingoel0...@gmail.com> wrote:
> Hi all,
> I'm trying to work out a general convergence framework for Machine Learning
> algorithms which utilize iterations for optimization. For now, I can think
> of three kinds of convergence functions which might be useful:
> 1. converge(data, modelBeforeIteration, modelAfterIteration)
> 2. converge(data, modelAfterIteration)
> 3. converge(data, modelBeforeIteration, iterationState, modelAfterIteration)
>
> where iterationState is some state computed while performing the iteration.
>
> Algorithm implementations would have to support all three of these, if
> possible. While specifying the {{Predictor}}, the user would implement the
> Convergence class and override these methods with their own implementation.
>
> Any feedback and design suggestions are welcome.
>
> Regards,
> Sachin Goel
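[For concreteness, the three proposed variants could be expressed as an overloaded Scala trait roughly like the sketch below. Only the name Convergence and the three signatures come from Sachin's mail; the type parameters D, M, S and the RelativeModelChange example are hypothetical, and none of this exists in Flink ML.]

// Hypothetical sketch of the proposed interface; D = data type,
// M = model type, S = iteration-state type.
trait Convergence[D, M, S] {
  // 1. converge(data, modelBeforeIteration, modelAfterIteration)
  def converge(data: D, before: M, after: M): Boolean

  // 2. converge(data, modelAfterIteration)
  def converge(data: D, after: M): Boolean

  // 3. converge(data, modelBeforeIteration, iterationState, modelAfterIteration)
  def converge(data: D, before: M, state: S, after: M): Boolean
}

// Example implementation: stop when the relative L2 change of a plain
// weight-vector model drops below a tolerance.
class RelativeModelChange(tol: Double)
    extends Convergence[Seq[Array[Double]], Array[Double], Double] {

  private def norm(v: Array[Double]): Double =
    math.sqrt(v.map(x => x * x).sum)

  override def converge(data: Seq[Array[Double]],
                        before: Array[Double],
                        after: Array[Double]): Boolean = {
    val delta = before.zip(after).map { case (b, a) => b - a }
    norm(delta) <= tol * math.max(norm(before), 1e-12)
  }

  // This criterion cannot be evaluated from the final model alone,
  // so the single-model variant never reports convergence.
  override def converge(data: Seq[Array[Double]],
                        after: Array[Double]): Boolean = false

  // The iteration state is unused for this criterion; fall back to
  // the before/after comparison.
  override def converge(data: Seq[Array[Double]],
                        before: Array[Double],
                        state: Double,
                        after: Array[Double]): Boolean =
    converge(data, before, after)
}

[As the RelativeModelChange example shows, requiring every algorithm to implement all three variants forces implementers to stub out signatures that make no sense for their criterion, which illustrates Theodore's concern about generalizing this properly.]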