See the following issue for a list of common tests that can be factored out:
https://github.com/scikit-learn/scikit-learn/issues/406
Mathieu
On 7 May 2012 06:49, Andreas Mueller wrote:
> On 05/06/2012 10:46 PM, Lars Buitinck wrote:
> > 2012/5/6 Gael Varoquaux:
> >> On Sun, May 06, 2012 at 10:36:04PM +0200, Andreas Mueller wrote:
> >>> Maybe we can include it with other checks,
> >> Sounds good.
> > I suggest we solve it with a utility method in ClassifierMixin. That
> > method could also set the classes_ attribute on the classifier.
On 05/06/2012 10:46 PM, Lars Buitinck wrote:
> 2012/5/6 Gael Varoquaux:
>> On Sun, May 06, 2012 at 10:36:04PM +0200, Andreas Mueller wrote:
>>> Maybe we can include it with other checks,
>> Sounds good.
> I suggest we solve it with a utility method in ClassifierMixin. That
> method could also set the classes_ attribute on the classifier.
On Sun, May 06, 2012 at 10:46:31PM +0200, Lars Buitinck wrote:
> I suggest we solve it with a utility method in ClassifierMixin. That
> method could also set the classes_ attribute on the classifier.
Sounds good.
> > I'd rather use 'ptp': peak to peak:
> That won't work when class labels aren't
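The utility method being proposed could look something like this minimal sketch. The method name `_check_targets` and the wiring are assumptions for illustration, not the actual scikit-learn API:

```python
# Sketch of the proposed ClassifierMixin helper: validate the target
# vector, require at least two classes, and record classes_ on the
# estimator. The name _check_targets is hypothetical.

class ClassifierMixin(object):
    def _check_targets(self, y):
        """Validate y, set self.classes_, raise if only one class."""
        classes = sorted(set(y))
        if len(classes) < 2:
            raise ValueError(
                "Classifier needs samples from at least 2 classes, "
                "got %d" % len(classes))
        self.classes_ = classes
        return classes


class DummyClassifier(ClassifierMixin):
    def fit(self, X, y):
        self._check_targets(y)  # called manually at the top of fit
        return self
```

Each `fit` method would call `self._check_targets(y)` first, so every classifier gets both the single-class check and the `classes_` attribute for free.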
On Sun, May 06, 2012 at 10:44:57PM +0200, Andreas Mueller wrote:
> > a util function that we manually add to each fit method?
> Another interesting question: how do we nosetest this for all classifiers?
:)
That's for later work, but I think that at some point we'll need some code
that crawls the s
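Such a generic test could be sketched roughly as follows: discover every classifier through subclass introspection and assert that each one rejects single-class input. This is purely illustrative; the class names and discovery mechanism here are mine, not scikit-learn's actual test infrastructure.

```python
# Illustrative sketch of a test that "crawls" all classifiers:
# find every subclass of a common base and check that each one
# refuses to fit on a target vector containing only one class.
# All names are hypothetical.

class ClassifierBase(object):
    def fit(self, X, y):
        if len(set(y)) < 2:
            raise ValueError("need samples from at least 2 classes")
        return self


class ClassifierA(ClassifierBase):
    pass


class ClassifierB(ClassifierBase):
    pass


def test_single_class_rejected():
    # Crawl the class hierarchy instead of listing classifiers by hand.
    for cls in ClassifierBase.__subclasses__():
        try:
            cls().fit([[0], [1]], [1, 1])  # y has only one class
        except ValueError:
            continue  # the expected outcome
        raise AssertionError("%s accepted single-class y" % cls.__name__)


test_single_class_rejected()
```

In a real test suite the loop body would come from a shared registry of estimators rather than `__subclasses__()`, but the shape of the test is the same.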
2012/5/6 Gael Varoquaux:
> On Sun, May 06, 2012 at 10:36:04PM +0200, Andreas Mueller wrote:
>> Maybe we can include it with other checks,
>
> Sounds good.
I suggest we solve it with a utility method in ClassifierMixin. That
method could also set the classes_ attribute on the classifier.
>> and maybe with a call to "unique" for the labels.
On 05/06/2012 10:33 PM, Alexandre Gramfort wrote:
>>> I feel that if a classifier is given only one class, it should somehow
>>> complain and not try to train.
>>> What do you think about that? Should all classifiers check their input
>>> in this way?
>>> Or do we just try to fit the classifier any way?
On Sun, May 06, 2012 at 10:40:08PM +0200, Andreas Mueller wrote:
> How would you do the inverse transform then?
OK, I didn't understand what you meant. Sorry for the noise.
On 05/06/2012 10:38 PM, Gael Varoquaux wrote:
> On Sun, May 06, 2012 at 10:36:04PM +0200, Andreas Mueller wrote:
>> Maybe we can include it with other checks,
> Sounds good.
>
>> and maybe with a call to "unique" for the labels.
> I'd rather use 'ptp': peak to peak:
>
> In [5]: a = np.random.randint(0, 20, 2)
On Sun, May 06, 2012 at 10:36:04PM +0200, Andreas Mueller wrote:
> Maybe we can include it with other checks,
Sounds good.
> and maybe with a call to "unique" for the labels.
I'd rather use 'ptp': peak to peak:
In [5]: a = np.random.randint(0, 20, 2)
In [6]: a.ptp()
Out[6]: 19
In [7]: %ti
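For the record, the two checks can be compared in a plain-Python sketch (`ptp` is written out as max minus min here; function names are mine). This also shows Lars's objection: the peak-to-peak trick only works for numeric labels, while counting unique labels handles any label type.

```python
# Comparing the two single-class checks discussed in this thread.
# a.ptp() is just max(a) - min(a), so it assumes numeric labels.

def one_class_ptp(y):
    # ptp-style check: zero range means a single (numeric) class.
    # Breaks on string labels, where subtraction is undefined.
    return max(y) - min(y) == 0


def one_class_unique(y):
    # unique-style check: works for any hashable labels.
    return len(set(y)) < 2


print(one_class_ptp([3, 3, 3]))           # -> True  (single class)
print(one_class_unique(["a", "b", "a"]))  # -> False (two classes)
```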
On 05/06/2012 10:34 PM, Gael Varoquaux wrote:
> On Sun, May 06, 2012 at 10:33:27PM +0200, Alexandre Gramfort wrote:
>> a util function that we manually add to each fit method?
> Something like that, I would say.
>
Maybe we can include it with other checks, and maybe with a call to
"unique" for the labels.
On Sun, May 06, 2012 at 10:33:27PM +0200, Alexandre Gramfort wrote:
> a util function that we manually add to each fit method?
Something like that, I would say.
G
>> I feel that if a classifier is given only one class, it should somehow
>> complain and not try to train.
>> What do you think about that? Should all classifiers check their input
>> in this way?
>> Or do we just try to fit the classifier any way?
>
> We should check for this and raise an exception.
2012/5/6 Andreas Mueller:
> I feel that if a classifier is given only one class, it should somehow
> complain and not try to train.
> What do you think about that? Should all classifiers check their input
> in this way?
> Or do we just try to fit the classifier any way?
We should check for this and raise an exception.
Hi everybody.
I just stumbled across an issue that comes up in the OutputCode classifier.
It may happen that a dataset is passed through that only has one class.
This leads to numeric issues in naive Bayes down the road.
I feel that if a classifier is given only one class, it should somehow
complain and not try to train.
What do you think about that? Should all classifiers check their input
in this way?
Or do we just try to fit the classifier any way?