On Fri, Apr 5, 2013 at 11:23 AM, Andreas Mueller wrote:
> On 04/05/2013 12:19 PM, Bill Power wrote:
> > I think you misunderstood me. I meant something (more efficiently
> > written) along the lines of below.
> >
> > import numpy as np
> >
> > X0 =
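The quoted snippet breaks off after "X0 =". A plausible shape for the "more efficiently written" numpy version, with X0/X1 standing in for per-label sample arrays (a sketch under those assumptions, not the original code):

import numpy as np

# toy per-label arrays; the shapes here are assumptions
X0 = np.random.randn(10, 2)  # samples carrying label 0
X1 = np.random.randn(10, 2)  # samples carrying label 1

# stack into the (X, y) pair that scikit-learn estimators expect
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(len(X0)), np.ones(len(X1))])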
On 05.04.2013 11:37, Bill Power wrote:
i know this is going to sound a little silly, but I was thinking that
it might be nice to be able to do this with scikit learn

clf = sklearn.anyClassifier()
clf.fit( { 0: dataWithLabel0,
           1: dataWithLabel1 } )

instead of having to separate the data/labels manually. i guess fit would
then do the separating internally.
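A minimal sketch of the proposal as a user-side helper (fit_from_dict is hypothetical, not a scikit-learn API), splitting the dict into the (X, y) pair fit expects:

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_from_dict(clf, labelled):
    # labelled maps label -> array of samples carrying that label
    X = np.vstack(list(labelled.values()))
    y = np.concatenate([np.full(len(v), k) for k, v in labelled.items()])
    return clf.fit(X, y)

data = {0: np.random.randn(10, 2), 1: np.random.randn(10, 2) + 2}
clf = fit_from_dict(LogisticRegression(), data)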
thanks peter. that makes sense.

does this mean that the outputs are distances to the hypersphere or
are they confidences? is there any issue with using the non-parameterised
sigmoid function to convert this confidence data to a (0, 1) range? or is
it best to just work with the raw values themselves?
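For concreteness, squashing the raw decision values through a plain logistic sigmoid might look like this (a sketch; the scores come from OneClassSVM.decision_function, and the output is not a calibrated probability):

import numpy as np
from sklearn.svm import OneClassSVM

X = np.random.randn(100, 2)                # toy training data
clf = OneClassSVM(gamma="auto").fit(X)

scores = clf.decision_function(X).ravel()  # signed distances to the boundary
squashed = 1.0 / (1.0 + np.exp(-scores))   # values in (0, 1), not calibrated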
thanks lars

i figured as much. do you know if there are any papers in the
literature that i might be able to implement and then perhaps
contribute the code to the package? or do i have to live with either
using distances or a non-parameterised sigmoid function?
thanks
---
hi all. just looking at the one class svm and I'd like to get a
probability rather than a distance output. i know that in regular
svms you can get parameters for the sigmoid function from five-fold
cross validation, and that's done by setting probability=True in
the constructor. i presume it's possible to do something similar
with the one class svm?
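For comparison, the regular-SVC behaviour being described, where probability=True makes scikit-learn fit the sigmoid parameters via internal five-fold cross validation (Platt scaling):

from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
clf = SVC(probability=True).fit(X, y)  # sigmoid fitted via internal 5-fold CV
print(clf.predict_proba(X[:5]))        # calibrated class probabilities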