Sure.
Usually, the convergence criterion can be user-defined. For example, for a
linear regression problem, a user might want to run the training until the
relative change in the squared error falls below a specific threshold, or
until the weights shift by less than some relative or absolute amount.
Similarly, for the k-means problem, we again have several different
convergence criteria, based on the change in the WCSS (within-cluster sum
of squares) value or the relative change in the centroids.
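To make the two examples above concrete, here is a minimal plain-Scala sketch of such checks. The object and method names are purely illustrative, not part of any FlinkML API:

```scala
// Illustrative user-defined convergence checks (names are hypothetical).
object ConvergenceChecks {
  // Relative change in squared error between two iterations.
  def errorConverged(prevError: Double, currError: Double, tol: Double): Boolean =
    math.abs(prevError - currError) / math.max(math.abs(prevError), 1e-12) < tol

  // Relative change in the weight vector (L2 norm of the difference
  // divided by the L2 norm of the previous weights).
  def weightsConverged(prev: Array[Double], curr: Array[Double], tol: Double): Boolean = {
    val diff = math.sqrt(prev.zip(curr).map { case (a, b) => (a - b) * (a - b) }.sum)
    val norm = math.sqrt(prev.map(w => w * w).sum)
    diff / math.max(norm, 1e-12) < tol
  }
}
```

The same shape works for k-means by substituting the WCSS value for the error and the centroids for the weights.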

The point is to provide the user with the solution before an iteration and
the solution after it, and let them decide whether it's time to stop
iterating. We can very well employ the iterateWithTermination semantics
even under this by setting the second term in the return value to
originalSolution.filter(x => !converged)
where converged is determined by the user-defined convergence criterion. Of
course, we're free to fall back to our own convergence criterion in case
the user doesn't specify any.
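For illustration, here is a simplified plain-Scala stand-in for that semantics (no DataSet, no Flink runtime): iteration stops as soon as the user-supplied step returns an empty termination set, mirroring the filter-based trick above. This is a sketch of the idea, not Flink's actual implementation:

```scala
// Simplified stand-in for DataSet.iterateWithTermination: iterate until
// the step function returns an empty termination set or maxIterations.
def iterateWithTermination[T](initial: T, maxIterations: Int)(
    step: T => (T, Seq[T])): T = {
  var solution = initial
  var terminationSet: Seq[T] = Seq(initial)
  var iter = 0
  while (iter < maxIterations && terminationSet.nonEmpty) {
    val (next, term) = step(solution)
    solution = next
    terminationSet = term
    iter += 1
  }
  solution
}

// User-defined convergence: stop once the solution stops changing much.
// (Newton's method for sqrt(2) as a stand-in for a training iteration.)
val result = iterateWithTermination(initial = 1.0, maxIterations = 100) { x =>
  val next = (x + 2.0 / x) / 2.0
  val converged = math.abs(next - x) < 1e-9
  // Mirrors originalSolution.filter(x => !converged): empty set => stop.
  (next, Seq(x).filter(_ => !converged))
}
```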

This achieves the desired effect.

This way, the user has more fine-grained control over the training phase.
Of course, to aid the user in defining their own convergence criteria, we
can provide some generic functions in the Predictor itself, for example, to
calculate the current value of the objective function. After that, the rest
is up to the user's imagination.
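One way such a helper could look, sketched in plain Scala (the trait name and signature are hypothetical, not an existing FlinkML interface):

```scala
// Hypothetical helper trait: a Predictor-like component exposes the
// current objective value so users can build convergence criteria on it.
trait ObjectiveAware[Model] {
  def objective(model: Model, data: Seq[(Array[Double], Double)]): Double
}

// Example instance: mean squared error of a linear model without bias.
object LinearMSE extends ObjectiveAware[Array[Double]] {
  def objective(weights: Array[Double],
                data: Seq[(Array[Double], Double)]): Double = {
    val squaredErrors = data.map { case (x, y) =>
      val prediction = x.zip(weights).map { case (xi, wi) => xi * wi }.sum
      val e = prediction - y
      e * e
    }
    squaredErrors.sum / data.size
  }
}
```

A user's convergence check could then simply compare the objective values before and after an iteration.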

Thinking more about this, I'd actually like to drop the idea of providing
an iteration state to the user. That only makes things more complicated and
further requires the user to know exactly what goes on inside the
algorithm. Usually, the before and after solutions should suffice. I got
too hung up on my decision tree implementation and wanted to incorporate
the convergence criteria used there as well.

Cheers!
Sachin

[Written from a mobile device. Might contain some typos or grammatical
errors]
On Jul 6, 2015 1:31 PM, "Theodore Vasiloudis" <
theodoros.vasilou...@gmail.com> wrote:

> Hello Sachin,
>
> could you share the motivation behind this? The iterateWithTermination
> function provides us with a means of checking for convergence during
> iterations, and checking for convergence depends highly on the algorithm
> being implemented. It could be the relative change in error, it could
> depend on the state (error+weights) history, or relative or absolute change
> in the model etc.
>
> Could you provide an example where having this function makes development
> easier? My concern is that this is a hard problem to generalize properly,
> given the dependence on the specific algorithm, model, and data.
>
> Regards,
> Theodore
>
> On Wed, Jul 1, 2015 at 9:23 PM, Sachin Goel <sachingoel0...@gmail.com>
> wrote:
>
> > Hi all
> > I'm trying to work out a general convergence framework for Machine
> Learning
> > Algorithms which utilize iterations for optimization. For now, I can
> think
> > of three kinds of convergence functions which might be useful.
> > 1. converge(data, modelBeforeIteration, modelAfterIteration)
> > 2. converge(data, modelAfterIteration)
> > 3. converge(data, modelBeforeIteration, iterationState,
> > modelAfterIteration)
> >
> > where iterationState is some state computed while performing the
> iteration.
> >
> > Algorithm implementation would have to support all three of these, if
> > possible. While specifying the {{Predictor}}, user would implement the
> > Convergence class and override these methods with their own
> implementation.
> >
> > Any feedback and design suggestions are welcome.
> >
> > Regards
> > Sachin Goel
> >
>
