oh thanks
On Thu, Mar 19, 2015 at 3:20 PM, Joel Nothman joel.noth...@gmail.com wrote:
I should have replied here. Liblinear with sample weights:
https://github.com/scikit-learn/scikit-learn/pull/2784
On 20 March 2015 at 09:12, Charles Martin charlesmarti...@gmail.com wrote:
Yes and thanks
Neighborhood Component Analysis is more cited than ITML.
On Wed, Mar 18, 2015 at 11:39 PM, Artem barmaley@gmail.com wrote:
Hello everyone
Recently I mentioned metric learning as one of the possible projects for this
year's GSoC, and would like to hear your comments.
Metric learning, as
Hello everyone,
I am a final year student of Computer Science from India. I study at the
Vishwakarma Institute of Technology in Pune. I am interested in various
areas under Machine Learning and Artificial Intelligence. I have a
theoretical background in both these subjects and a limited
This is off-topic, but I should note that there is a patch at
https://github.com/scikit-learn/scikit-learn/pull/2784 that has been
awaiting review for a while now...
On 20 March 2015 at 08:16, Charles Martin charlesmarti...@gmail.com wrote:
I would like to propose extending the LinearSVC package
by
Hi All,
you can find my proposal for the hyperparameter optimization topic here:
* http://goo.gl/XHuav8
* https://docs.google.com/document/d/1bAWdiu6hZ6-FhSOlhgH-7x3weTluxRfouw9op9bHBxs/edit?usp=sharing
Please give feedback!
Cheers,
Christof
On 2015-03-10 15:27, Sturla Molden wrote:
Andreas
Hi Charles.
That is unrelated to the GSoC mail you responded to, right?
I think updating liblinear sounds like a good idea, if it doesn't end up
being too complicated.
Allowing instance weights is certainly something we'd like to have.
You should check how far our code diverged, but I think for
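For reference, per-sample weights are already accepted by some scikit-learn estimators, e.g. SGDClassifier, which is the kind of API the liblinear patch would bring to LinearSVC. A minimal sketch on toy data (the data and weight values are illustrative):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy 1-D data: two classes along a line.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

# Upweight the points nearest the boundary; SGDClassifier already
# accepts per-sample weights in fit, as LinearSVC would after the patch.
weights = np.array([1.0, 5.0, 5.0, 1.0])

clf = SGDClassifier(loss="hinge", random_state=0)
clf.fit(X, y, sample_weight=weights)
print(clf.predict(X))
```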
Does anybody know of further optimization approaches that were not
mentioned below and that we could consider?
Maybe parallel computing. A grid search is an embarrassingly parallel
problem; Bayesian optimization is not. We have the necessary framework
only to tackle embarrassingly parallel
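The embarrassingly parallel case is already covered by grid search's n_jobs parameter: every point in the grid is fit independently, so the fits can be farmed out to separate processes. A minimal sketch (the module path is from current scikit-learn; the parameter grid is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Each (C, gamma) cell is an independent fit, so the whole search
# parallelises trivially across processes via n_jobs=-1 (all cores).
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1]}
search = GridSearchCV(SVC(), param_grid, n_jobs=-1, cv=3)
search.fit(X, y)
print(search.best_params_)
```

Bayesian optimization, by contrast, is sequential: each new candidate depends on all previous evaluations, so it cannot be split up the same way.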
Yes, your suggestion is viable, but I have not seen any algorithms in
sklearn that use y like that in the fit method.
I would have thought in the case of Mahalanobis distances that transform
would transform each feature such that the resulting feature space was
Euclidean.
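That property can be checked directly: a Mahalanobis metric M is positive semidefinite, so it factors as M = L.T @ L, and transforming each point by L makes plain Euclidean distance reproduce the Mahalanobis distance. A minimal numpy sketch (the matrices are random, for illustration only):

```python
import numpy as np

rng = np.random.RandomState(0)

# A learned Mahalanobis metric M must be positive semidefinite,
# so it factors as M = L.T @ L (here via Cholesky, for a PD M).
A = rng.randn(3, 3)
M = A.T @ A + 3 * np.eye(3)    # force M to be positive definite
L = np.linalg.cholesky(M).T    # then M = L.T @ L

x, z = rng.randn(3), rng.randn(3)

# Mahalanobis distance under M ...
d_maha = np.sqrt((x - z) @ M @ (x - z))
# ... equals Euclidean distance after transform(v) = L @ v.
d_eucl = np.linalg.norm(L @ x - L @ z)

print(np.allclose(d_maha, d_eucl))  # True
```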
Exactly. Thus,