Hi Andreas,

Indeed, I was not yet subscribed to the mailing list.

The scikit-learn version I have installed is 0.16.1.

I did not get an error when passing a 1d X; what I get back is one prediction
per element of the 1d array.

For example:

>>> from sklearn import datasets
>>> from sklearn.multiclass import OneVsOneClassifier
>>> from sklearn.svm import LinearSVC
>>> iris = datasets.load_iris()
>>> X, y = iris.data, iris.target
>>> OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X[1, :])
array([0, 1, 1, 1])
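
For reference, here is the shape difference between the two forms of indexing
(using the iris data above, which has 4 features per sample):

>>> X[1, :].shape    # 1d: a single sample flattened to its 4 feature values
(4,)
>>> X[1:2, :].shape  # 2d: one sample, four features
(1, 4)

Presumably the 1d input ends up being counted as four samples via
n_samples = X.shape[0], which would explain the four predictions above.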


And by replacing X[1, :] with X[1:2, :], which contains the same values but is
2d, I get the proper output:

>>> OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X[1:2, :])
array([0])  # proper output
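
For completeness, the workaround I am using in the meantime is simply to keep
the input 2d, either by slicing or by reshaping a 1d sample explicitly. A
minimal sketch reusing the session above (the reshape call assumes NumPy is
imported):

>>> import numpy as np
>>> clf = OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y)
>>> clf.predict(X[1:2, :])                           # slice keeps the sample 2d
array([0])
>>> clf.predict(np.asarray(X[1, :]).reshape(1, -1))  # or reshape a 1d sample
array([0])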


Regards,

Othman Soufan



PhD Candidate
Mathematical and Computer Sciences and Engineering
King Abdullah University of Science and Technology
Thuwal 23955-6900
KAUST Mail Box # 2620
Kingdom of Saudi Arabia
Tel.: (+966) 506134003

On Tue, Aug 18, 2015 at 6:54 PM, Andreas Mueller <t3k...@gmail.com> wrote:

> Hi.
> I just replied to the thread above, maybe you weren't subscribed to the ml
> yet.
>
> Did you get an error when inputting a 1d X?
> Which version of scikit-learn are you on?
>
> X should really always be 2d. Unfortunately that is currently
> inconsistent, and will be fixed soon.
>
> So yes, that will be fixed, but it would be great to know the exact
> behavior you encountered,
> and the version.
>
> Thanks,
> Andy
>
>
>
> On 08/18/2015 11:50 AM, Othman Soufan wrote:
>
> Greetings Guys,
>
> I came across the contributed implementation in multiclass.py in
> scikit-learn. I have a suggestion: please consider the case where only one
> test sample is passed to decision_function ("Decision function for the
> OneVsOneClassifier"). With the current implementation, an undesirable output
> is produced because n_samples = X.shape[0] takes a value larger than one when
> X is a single 1d vector of feature values. I would suggest either checking
> the shape of X before processing it, or updating the documentation to advise
> the user on the recommended way to get the prediction for a single test
> sample.
>
> It is true that there is usually a test set of many samples, but in my
> specific case it was preferable to predict sample by sample. I worked around
> this by using X[0:1,:] instead of X[0,:], where X is a test set of several
> samples.
>
> Regards,
> Othman Soufan
>
>