On Wed, Mar 18, 2015 at 07:21:18PM +0300, Artem wrote:
> As to what y should look like, it depends on what we'd like the algorithm to
> do. We can go with the usual y vector consisting of class labels. Actually, LMNN
> is done this way; its optimization objective depends on the equality of labels
> only. For ITML (and many others) we need sets of (S)imilar and (D)issimilar
> pairs, which can also be inferred from labels.

> This is a bit less generic, since it implies that similarity is transitive,
> which is not true in the general case. For the general case we'd need a way to
> feed in actual pairs. This could be done by giving fit two optional arguments
> (similar and dissimilar) defaulting to None, which are inferred from y when
> absent.
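For concreteness, here is a minimal sketch of the interface described above. The class and helper names (`PairwiseMetricLearner`, `pairs_from_labels`) are hypothetical, not scikit-learn API; the point is only the signature: explicit pair arguments default to None and fall back to label-derived pairs, which bakes in the transitivity assumption.

```python
import numpy as np
from itertools import combinations

def pairs_from_labels(y):
    """Infer (similar, dissimilar) index pairs from class labels.

    Assumes similarity is transitive: two samples are similar
    iff they share a label. Hypothetical helper for illustration.
    """
    y = np.asarray(y)
    similar, dissimilar = [], []
    for i, j in combinations(range(len(y)), 2):
        (similar if y[i] == y[j] else dissimilar).append((i, j))
    return similar, dissimilar

class PairwiseMetricLearner:
    """Sketch of the proposed fit signature (not a real estimator):
    explicit pairs take precedence; otherwise they are inferred from y."""

    def fit(self, X, y=None, similar=None, dissimilar=None):
        if similar is None and dissimilar is None:
            if y is None:
                raise ValueError("need either y or explicit pairs")
            similar, dissimilar = pairs_from_labels(y)
        self.similar_, self.dissimilar_ = similar, dissimilar
        # ... a real estimator would run its metric-learning
        # optimization over these pairs here ...
        return self
```

With labels [0, 0, 1, 1], fit(X, y) would infer (0, 1) and (2, 3) as similar and the four cross-class pairs as dissimilar, while a caller with intransitive similarity judgments could pass the pair lists directly.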

For now, I don't think that we want to add new variants of the
scikit-learn API.

G

_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
