On Mon, Feb 11, 2013 at 4:39 PM, Wei LI <li...@ee.cuhk.edu.hk> wrote:
> In my view, hyperparameters cannot be optimized with standard
> optimization techniques (or else they would become ordinary parameters
> and could no longer be set empirically?), so some heuristic on top of
> brute-force search may be a good idea. I am thinking of another
> heuristic to accelerate the process: a warm start after we have
> trained a model. I do not have any sound theory for this, but for SVMs
> in particular, since the global optimum is guaranteed, maybe a warm
> start would accelerate convergence without biasing the trained model?

With respect to C, the SVM can definitely be warm-started, although
neither libsvm nor our bindings allow it at the moment. With respect to
kernel parameters, I doubt that warm-starting helps, although I've never
tried it (my intuition is that a small perturbation of a kernel
parameter can result in a radically different solution).
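To make the C idea concrete: a minimal sketch using SGDClassifier, which
fits a linear SVM (hinge loss) and does accept warm_start, unlike the
libsvm-based SVC. The data and the alpha path here are made up for
illustration; alpha plays the role of 1/C.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Toy data, purely illustrative.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

clf = SGDClassifier(loss="hinge", warm_start=True, random_state=0)
scores = []
for alpha in [1.0, 0.1, 0.01, 0.001]:  # regularization path, strong to weak
    clf.set_params(alpha=alpha)
    clf.fit(X, y)  # with warm_start=True, the previous coef_ seeds this fit
    scores.append(clf.score(X, y))
```

Each fit starts from the coefficients of the previous one, which is the
whole point of walking the path in order rather than refitting from
scratch.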

Warm-start is supported in some estimators, Lasso for example:

import numpy as np
from sklearn.linear_model import Lasso

lasso = Lasso(warm_start=True)
alphas = np.logspace(0, -3, 10)  # from strong to weak regularization
scores = []
for alpha in alphas:
    lasso.set_params(alpha=alpha)
    lasso.fit(X_train, y_train)  # reuses the previous coef_ as a starting point
    scores.append(lasso.score(X_test, y_test))

I created an issue for a warm-start aware grid search object:
https://github.com/scikit-learn/scikit-learn/issues/1674
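For a taste of what such an object might do, here is a hypothetical
sketch (not the actual proposal in the issue): refit a single
warm-started estimator along a sorted parameter path and keep the best
value. The helper name warm_path_search and the data are invented for
illustration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

def warm_path_search(estimator, param_name, values, X_tr, y_tr, X_te, y_te):
    """Hypothetical helper: walk a parameter path on one warm-started model."""
    best_value, best_score = None, -np.inf
    for value in sorted(values, reverse=True):  # strong to weak regularization
        estimator.set_params(**{param_name: value})
        estimator.fit(X_tr, y_tr)  # warm-started from the previous solution
        score = estimator.score(X_te, y_te)
        if score > best_score:
            best_value, best_score = value, score
    return best_value, best_score

X, y = make_regression(n_samples=200, n_features=30, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
best_alpha, best_score = warm_path_search(
    Lasso(warm_start=True), "alpha", [1.0, 0.1, 0.01],
    X_tr, y_tr, X_te, y_te)
```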

Mathieu

_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general