Re: [scikit-learn] NuSVC and ValueError: specified nu is infeasible

2016-12-08 Thread Thomas Evangelidis
It finally works with nu=0.01 or less, and the predictions are good. Is
there a problem with that?
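
For context: the most likely reason nu=0.01 works while larger values fail is
the class imbalance rather than anything being wrong. libsvm's feasibility
check requires, for each pair of classes, roughly
nu * n_samples / 2 <= min(n_positive, n_negative). A quick sketch of that
check with the counts reported in this thread (the threshold formula comes
from the libsvm literature, not from this thread):

# why nu=0.01 is feasible but nu=0.1 is not for 48 positives / 1230 negatives
n_pos, n_neg = 48, 1230
n = n_pos + n_neg
nu_max = 2.0 * min(n_pos, n_neg) / n
print("largest feasible nu ~= %.4f" % nu_max)  # ~= 0.0751
for nu in [0.01, 0.05, 0.1, 0.5]:
    print("nu=%.2f: %s" % (nu, "feasible" if nu <= nu_max else "infeasible"))

So any nu at or below roughly 0.075 should fit, and nu=0.01 seems fine as
long as the resulting model validates well.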





Re: [scikit-learn] NuSVC and ValueError: specified nu is infeasible

2016-12-08 Thread Thomas Evangelidis
> @Thomas
> I still think the optimization problem is not feasible due to your data.
> Have you tried balancing the dataset as I mentioned in your other
> question regarding the MLPClassifier?
Hi Piotr,

I had tried all the balancing algorithms in the link you posted, but the
only one that really offered some improvement was SMOTE over-sampling of
the positive observations. The original dataset contained 24 positive and
1230 negative observations; after SMOTE I doubled the positives to 48.
Reducing the negative observations led to poor predictions, at least with
random forests. I haven't tried it with the MLPClassifier yet, though.
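
For reference, a minimal sketch of that oversampling step, assuming the
balancing link was to imbalanced-learn (X and y are placeholder names for
the original 24-positive/1230-negative data; the keyword for requesting
exact target counts has varied across imblearn versions):

from collections import Counter
from imblearn.over_sampling import SMOTE

# double the minority class from 24 to 48 by synthesizing new positives
smote = SMOTE(sampling_strategy={1: 48}, random_state=0)
X_res, y_res = smote.fit_resample(X, y)
print(Counter(y_res))  # expect Counter({0: 1230, 1: 48})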


Re: [scikit-learn] NuSVC and ValueError: specified nu is infeasible

2016-12-08 Thread Piotr Bialecki
I thought the same about the correspondence between SVC and NuSVC.

Any ideas why lowering the value might also help?
http://stackoverflow.com/questions/35221433/error-in-using-non-linear-svm-in-scikit-learn

He apparently used a very low value for nu (0.01) and the error vanished.


Greets,
Piotr



On 08.12.2016 11:18, Michael Eickenberg wrote:
Ah, sorry, true. It is the error fraction instead of the number of errors.

In any case, try varying this quantity.

At one point I thought that NuSVC is just the constrained-optimization version
of the Lagrange-style (penalized) standard SVC. That would mean there is a
correspondence between C for SVC and nu for NuSVC, leading to the conclusion
that there must be values of nu that are feasible, so setting nu=1.0 should
always lead to feasibility. Now, looking at the docstring, since nu controls
two quantities at the same time, I am not entirely sure of this anymore, but
I think it still holds.

Michael
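
A rough way to probe that claimed correspondence on a balanced toy problem
(a heuristic sketch, not an exact equivalence: nu is simply matched to the
support-vector fraction of the C-SVC solution):

import numpy as np
from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

svc = svm.SVC(C=1.0).fit(X, y)
sv_frac = len(svc.support_) / float(len(X))  # fraction of support vectors

# nu is a lower bound on the SV fraction, so this value should be feasible
nusvc = svm.NuSVC(nu=sv_frac).fit(X, y)
print("SV fraction: %.2f" % sv_frac)
print("prediction agreement: %.2f" % np.mean(svc.predict(X) == nusvc.predict(X)))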



Re: [scikit-learn] NuSVC and ValueError: specified nu is infeasible

2016-12-08 Thread Piotr Bialecki
Hi Michael, hi Thomas,

I think the nu value is restricted to (0, 1], so the code will result in a
ValueError (at least in sklearn 0.18).

@Thomas
I still think the optimization problem is not feasible due to your data.
Have you tried balancing the dataset as I mentioned in your other question 
regarding the MLPClassifier?


Greets,
Piotr
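
For instance (a sketch; the exact wording of the error comes from libsvm's
parameter check and may vary by version):

from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(random_state=0)
svm.NuSVC(nu=2.0).fit(X, y)  # raises ValueError ("nu <= 0 or nu > 1")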






Re: [scikit-learn] NuSVC and ValueError: specified nu is infeasible

2016-12-08 Thread Michael Eickenberg
You have to set a bigger nu. Try:

import numpy as np
from sklearn import svm

# float base: integer arrays raise on negative integer powers in numpy
nus = 2.0 ** np.arange(-1, 10)  # starting at .5 (default), going to 512
for nu in nus:
    clf = svm.NuSVC(nu=nu)
    try:
        clf.fit(X, y)  # X, y: your training data
    except ValueError as e:
        print("nu {} not feasible".format(nu))

At some point it should start working.

Hope that helps,
Michael
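
Given the (0, 1] restriction noted elsewhere in this thread, a sweep that
stays in range and moves downward may be more useful. A self-contained
sketch on a toy set with roughly the same imbalance as in this thread:

from sklearn import svm
from sklearn.datasets import make_classification

# ~48 positives out of 1278 samples, mirroring the reported counts
X, y = make_classification(n_samples=1278, weights=[0.9624], random_state=0)
for nu in [0.5, 0.2, 0.1, 0.05, 0.02, 0.01]:
    clf = svm.NuSVC(nu=nu)
    try:
        clf.fit(X, y)
        print("nu {} feasible".format(nu))
    except ValueError:
        print("nu {} not feasible".format(nu))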





Re: [scikit-learn] NuSVC and ValueError: specified nu is infeasible

2016-12-08 Thread Thomas Evangelidis
Hi Piotr,

The SVC performs quite well, slightly better than random forests on the
same data. By training error do you mean this?

clf = svm.SVC(probability=True)
clf.fit(train_list_resampled3, train_activity_list_resampled3)
print "training error=", clf.score(train_list_resampled3,
train_activity_list_resampled3)

If this is what you mean by "skip the sample_weights":
clf = svm.NuSVC(probability=True)
clf.fit(train_list_resampled3, train_activity_list_resampled3,
sample_weight=None)

then no, it does not converge. After all "sample_weight=None" is the
default value.

I am out of ideas about what may be the problem.

Thomas
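
One caveat on the snippets above: score() reports accuracy, not error, and
with roughly 96% negatives accuracy can look deceptively good. A quick
per-class check (a sketch reusing the variable names from this message):

from sklearn.metrics import confusion_matrix

y_pred = clf.predict(train_list_resampled3)
# rows are true classes, columns are predicted classes
print(confusion_matrix(train_activity_list_resampled3, y_pred))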



Re: [scikit-learn] NuSVC and ValueError: specified nu is infeasible

2016-12-07 Thread Piotr Bialecki
Hi Thomas,

The docs say that nu gives an upper bound on the fraction of training errors
and a lower bound on the fraction of support vectors:
http://scikit-learn.org/stable/modules/generated/sklearn.svm.NuSVC.html

Therefore, it acts as a hard bound on the allowed misclassification on your 
dataset.

To me it seems as if the error bound is not feasible.
How well did the SVC perform? What was your training error there?

Will the NuSVC converge when you skip the sample_weights?


Greets,
Piotr
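
Both bounds are easy to see empirically on a balanced toy problem (a sketch
illustrating the documented property, not a result from this thread):

from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, random_state=0)
for nu in [0.1, 0.3, 0.5]:
    clf = svm.NuSVC(nu=nu).fit(X, y)
    sv_frac = len(clf.support_) / float(len(X))
    train_err = 1.0 - clf.score(X, y)
    # expect sv_frac >= nu and train_err <= nu (approximately)
    print("nu=%.1f  SV fraction=%.2f  training error=%.2f"
          % (nu, sv_frac, train_err))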


[scikit-learn] NuSVC and ValueError: specified nu is infeasible

2016-12-07 Thread Thomas Evangelidis
Greetings,

I want to use the Nu-Support Vector Classifier with the following input
data:

X= [
array([  3.90387012,   1.60732281,  -0.33315799,   4.02770896,
 1.82337731,  -0.74007214,   6.75989219,   3.68538903,
 ..
 0.,  11.64276776,   0.,   0.]),
array([  3.36856769e+00,   1.48705816e+00,   4.28566992e-01,
 3.35622071e+00,   1.64046508e+00,   5.66879661e-01,
 .
 4.25335335e+00,   1.96508829e+00,   8.63453394e-06]),
 array([  3.74986249e+00,   1.69060713e+00,  -5.09921270e-01,
 3.76320781e+00,   1.67664455e+00,  -6.21126735e-01,
 ..
 4.16700259e+00,   1.88688784e+00,   7.34729942e-06]),
...
]

and

Y=  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0]


Each array of X contains 60 numbers, and the dataset consists of 48
positive and 1230 negative observations. When I train an svm.SVC()
classifier I get quite good predictions, but with the svm.NuSVC() I keep
getting the following error no matter which value of nu in [0.1, ..., 0.9,
0.99, 0.999, 0.] I try:

/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.pyc in fit(self,
X, y, sample_weight)
    187
    188 seed = rnd.randint(np.iinfo('i').max)
--> 189 fit(X, y, sample_weight, solver_type, kernel, random_seed=seed)
    190 # see comment on the other call to np.iinfo in this file
    191
/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.pyc in
_dense_fit(self, X, y, sample_weight, solver_type, kernel, random_seed)
    254 cache_size=self.cache_size, coef0=self.coef0,
    255 gamma=self._gamma, epsilon=self.epsilon,
--> 256 max_iter=self.max_iter, random_seed=random_seed)
    257
    258 self._warn_from_fit_status()
/usr/local/lib/python2.7/dist-packages/sklearn/svm/libsvm.so in
sklearn.svm.libsvm.fit (sklearn/svm/libsvm.c:2501)()
ValueError: specified nu is infeasible

Does anyone know what might be wrong? Could it be the input data?

Thanks in advance for any advice,
Thomas



-- 

==

Thomas Evangelidis

Research Specialist
CEITEC - Central European Institute of Technology
Masaryk University
Kamenice 5/A35/1S081,
62500 Brno, Czech Republic

email: tev...@pharm.uoa.gr

  teva...@gmail.com


website: https://sites.google.com/site/thomasevangelidishomepage/
___
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn