It finally works with nu=0.01 or less, and the predictions are good. Is
there a problem with that?
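For what it's worth, if the estimator in question is scikit-learn's NuSVC, a small nu is exactly what one would expect here: the nu-SVM optimization problem is only feasible for nu up to 2*min(n_pos, n_neg)/n, so with heavily imbalanced class counts the feasible range is narrow. A quick back-of-the-envelope check (using the class counts mentioned below; this is an assumption that NuSVC is the model being discussed):

```python
# Feasibility bound for nu in nu-SVC: nu <= 2 * min(n_pos, n_neg) / n_total.
# With imbalanced classes this bound is small, so nu=0.01 is not suspicious.
n_pos, n_neg = 48, 1230  # counts after SMOTE, taken from the thread below
nu_max = 2 * min(n_pos, n_neg) / (n_pos + n_neg)
print(round(nu_max, 4))  # ~0.0751, so nu=0.01 is well inside the feasible range
```

So nu=0.01 being the first value that works is consistent with the class imbalance rather than a sign of a problem by itself.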

On 8 December 2016 at 12:57, Thomas Evangelidis <[email protected]> wrote:

>
>
>>
>> @Thomas
>> I still think the optimization problem is not feasible due to your data.
>> Have you tried balancing the dataset as I mentioned in your other
>> question regarding the
>> MLPClassifier?
>>
>>
>>
> Hi Piotr,
>
> I had tried all the balancing algorithms in the link you mentioned, but
> the only one that really offered some improvement was SMOTE
> over-sampling of the positive observations. The original dataset contained
> 24 positive and 1230 negative observations, but after SMOTE I doubled the
> positives to 48. Reducing the negative observations led to poor
> predictions, at least with random forests. I haven't tried it with
> MLPClassifier yet, though.
>
>
>
>
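For readers following the thread, the core idea of the SMOTE step described above can be sketched in a few lines. This is a toy illustration of the interpolation SMOTE performs, not the imbalanced-learn implementation; the function name `smote_like` and its arguments are made up for this sketch:

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Toy sketch of SMOTE's idea: synthesize new minority samples by
    interpolating between a sample and one of its k nearest minority
    neighbours (squared Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(float(i), float(i % 5)) for i in range(24)]  # toy 2-D "positives"
synthetic = smote_like(minority, len(minority))           # 24 -> 48 total
print(len(minority) + len(synthetic))  # 48
```

In practice one would use imbalanced-learn's SMOTE rather than anything hand-rolled; the sketch is only meant to show why the synthetic positives stay inside the region spanned by the real ones.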


-- 

======================================================================

Thomas Evangelidis

Research Specialist
CEITEC - Central European Institute of Technology
Masaryk University
Kamenice 5/A35/1S081,
62500 Brno, Czech Republic

email: [email protected]
       [email protected]


website: https://sites.google.com/site/thomasevangelidishomepage/
_______________________________________________
scikit-learn mailing list
[email protected]
https://mail.python.org/mailman/listinfo/scikit-learn
