Thank you! That helped me a lot!!!
On 5 August 2015 at 11:23, Artem wrote:
> for i in range(len(predicted)):
>     auc.append(predicted[i][0])
This is the source of the error. predict_proba returns a matrix (numpy
array, to be precise) of shape (n_samples, n_classes). Obviously, in your
case n_classes = 2.
A cell at a given row and column is the probability that the sample in
that row belongs to the class of that column.
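For instance, a minimal sketch (X, y, and clf are placeholder names for
illustration, not from your code):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # toy two-class data as a stand-in for the real problem
    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0, 1, 1, 1])

    clf = LogisticRegression().fit(X, y)
    predicted = clf.predict_proba(X)

    print(predicted.shape)        # (4, 2): (n_samples, n_classes)
    print(predicted.sum(axis=1))  # each row sums to 1.0
    # predicted[i][0] is P(sample i belongs to class 0),
    # predicted[i][1] is P(sample i belongs to class 1)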
And here is an example output (don't wonder, the test data is just a
subsample of the training set; that's why there are 1.0s everywhere).
AUC = 0.0
             precision    recall  f1-score   support

        0.0       1.00      1.00      1.00         2
        1.0       1.00      1.00      1.00
Maybe I didn't explain it very well, sorry.
I just have 1 column as a target. The last "post" I did was just a
conversion of all 0's to 1's and all 1's to 0's. But the auc and the
expected values are from the same data, which is converted. So actually it
should be
auc is [0.952710670069, 0.01890450385597026, 0.0059624156214325846, 0.05391726570661811]
You should select the other column from predict_proba for auc.
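That is, something like this (a sketch reusing the placeholder names from
the example above):

    from sklearn.metrics import roc_auc_score

    # roc_auc_score expects scores for the positive class, so take
    # column 1 of predict_proba instead of column 0
    print(roc_auc_score(y, predicted[:, 1]))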
On 08/04/2015 10:54 AM, Herbert Schulz wrote:
Thanks for the answer!
Hmm, it's possible. I just made a little example:
auc is [0.952710670069, 0.01890450385597026, 0.0059624156214325846, 0.05391726570661811]
expected is [0.0, 1.0, 1.0, 1.0]
But this is already with the changed values; in the test set I set every
value 0 -> 1 and 1 -> 0. So th…
Hi Herbert,
The worst value for AUC is 0.5, actually. Having values close to 0 means
that you can get a value just as close to 1 by flipping your predictions
(predict class 1 when you think it's 0 and vice versa). Are you sure you
didn't confuse the classes somewhere along the line? (You might have
chosen the wrong column of predict_proba, for example.)
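That symmetry is easy to check with the numbers from this thread (just an
illustrative snippet):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    expected = np.array([0.0, 1.0, 1.0, 1.0])
    scores = np.array([0.952710670069, 0.01890450385597026,
                       0.0059624156214325846, 0.05391726570661811])

    print(roc_auc_score(expected, scores))      # 0.0
    print(roc_auc_score(expected, 1 - scores))  # 1.0
    # AUC(y, s) + AUC(y, 1 - s) = 1, so an AUC near 0 usually means a
    # good classifier whose classes (or probability columns) were swapped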