Hi Sebastian,

Thank you for your answer.

What I mean is that, using cross-validation, I get 100% accuracy (on the 
test folds, not on the training set).

This seemed too good a result, so I changed the y labels (i.e., I replaced 
the true labels with false ones) to check that, as expected, the accuracy 
would decrease.

The accuracy did decrease, but it never dropped below 60%.

Isn't that a bit strange? Shouldn't chance-level accuracy fall below 50% 
about as often as it falls above 50%?
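
To illustrate, here is a minimal sketch of the shuffled-label check I mean 
(the linear SVC, the random features, and cv=4 are placeholder assumptions, 
not my actual estimator and data). With only 16 samples, the chance-level 
accuracy of a single cross-validation run is very noisy, so individual 
shuffled runs well above 50% may not be so strange:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.randn(16, 10)              # placeholder 16 x 10 feature matrix
y = np.array([0] * 8 + [1] * 8)    # 8 labels = 0 and 8 labels = 1, as in my set

scores = []
for _ in range(200):
    y_perm = rng.permutation(y)    # shuffle labels; the 8/8 balance is preserved
    scores.append(cross_val_score(SVC(kernel="linear"), X, y_perm, cv=4).mean())

print("chance accuracy: mean=%.2f  std=%.2f  max=%.2f"
      % (np.mean(scores), np.std(scores), np.max(scores)))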

As I replied to Andras, I didn't perform an exhaustive test when replacing 
the true y labels with false ones; I only ran a few manual tests (always 
retaining 8 labels = 1 and 8 labels = 0, as in the true set). When the 8/8 
balance was changed (e.g. to 6/10 or 4/12), performance decreased, as 
expected, to around 50% (chance level).
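
I guess the systematic version of this check would be scikit-learn's 
permutation_test_score (in sklearn.model_selection on recent versions, in 
sklearn.cross_validation on older ones). A rough sketch, again with 
placeholder data and estimator; note that permuting y always keeps the 8/8 
balance, so it only covers the balanced case:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import permutation_test_score

rng = np.random.RandomState(0)
X = rng.randn(16, 10)              # placeholder feature matrix, as above
y = np.array([0] * 8 + [1] * 8)    # 8/8 balanced labels

score, perm_scores, pvalue = permutation_test_score(
    SVC(kernel="linear"), X, y, cv=4, n_permutations=1000, random_state=0)
print("true-label score: %.2f  permutation p-value: %.3f" % (score, pvalue))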

thank you for your suggestions, 

Fabrizio