Just in case this does appear twice: I am sending this for the second time, as I
have not seen it appear in the website archives, nor has it featured in the
latest mail that I have received from this mailing list.
This is really following on from the recent problems that I have been having.
ROC AUC doesn't use binary predictions as its input; it uses the measure of
confidence (or "decision function") that each sample should be assigned 1.
cross_val_score is correctly using decision_function to get these
continuous values, and you should find its results replicated by using
roc_auc_score.
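For anyone following along, here is a minimal sketch of that check. The synthetic dataset, the seeds, and the current sklearn.model_selection import paths are my assumptions, not the original poster's setup (older releases had cross_val_score in sklearn.cross_validation):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=200, random_state=0)
clf = SVC(C=1, kernel='rbf')
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# With scoring='roc_auc', cross_val_score uses the continuous
# decision_function values, not the binary output of predict().
cv_scores = cross_val_score(clf, X, y, cv=cv, scoring='roc_auc')

# Replicate the first fold by hand with roc_auc_score.
train_idx, test_idx = next(iter(cv.split(X, y)))
clf.fit(X[train_idx], y[train_idx])
manual = roc_auc_score(y[test_idx], clf.decision_function(X[test_idx]))

print(cv_scores[0], manual)  # the two numbers should match
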
Dear Joel,
Thank you for your reply. I used the decision_function and it did replicate the scores.
But I was wondering if someone could help me with this further. For this
particular dataset and with these parameters (C=1, kernel='rbf'), the
classifier is always outputting 0 for every sample.
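(Not from the thread, just an illustrative aside on the all-zeros issue: when an RBF SVC predicts a single class everywhere, it can help to look at the class balance and at where the decision_function values sit relative to 0. The helper below, with made-up names and assuming integer 0/1 labels, is one way to do that.)

import numpy as np
from sklearn.svm import SVC

def diagnose_constant_predictions(X, y, C=1.0, kernel='rbf'):
    # Hypothetical helper: fit an SVC and report why predict() may collapse
    # to a single class while the continuous scores still rank samples.
    clf = SVC(C=C, kernel=kernel).fit(X, y)
    scores = clf.decision_function(X)
    print("class counts:", np.bincount(y))          # assumes 0/1 integer labels
    print("unique predictions:", np.unique(clf.predict(X)))
    print("decision_function min/max:", scores.min(), scores.max())
    # If every decision value sits on one side of 0, predict() returns the
    # same class for everything, even though ROC AUC (which only needs the
    # ranking of the scores) can still look reasonable.
    return clf
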
Hi Andy,
*This is actually the second time that I am sending this reply, as I don't think
that it got to the mailing list, and I cannot see it in the archives.*
Yes, I’m back to receiving the emails now. I noticed that in the previous
thread, there was some message about the website being down? But
On 01/19/2015 10:43 AM, Timothy Vivian-Griffiths wrote:
> I have used this same dataset and parameters in R's implementation of an SVM,
> and it is not outputting all 0s, so I don't think that it's a particular
> problem with the data.
This seems odd. What implementation are you using in R?
Scik
Hi Tim.
I think it is highly unlikely that the splits are not reproducible, or that
this is caused by parallelization.
If you run ``cross_val_score`` twice with the same seed, the outcomes
are exactly the same, right?
How are you trying and failing to reproduce the scores?
Best,
Andy
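A quick sketch of the check Andy describes, with a synthetic dataset and seeds that are assumptions on my part: running cross_val_score twice with the same CV seed, even with the folds parallelized, should give identical scores.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, random_state=42)
clf = SVC(C=1, kernel='rbf')

def run(seed):
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    # n_jobs=-1 runs the folds in parallel; it should not change the scores.
    return cross_val_score(clf, X, y, cv=cv, scoring='roc_auc', n_jobs=-1)

print(run(0))
print(run(0))  # identical to the first call when the seed is the same
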