Hi Andy,
Yes, here is the full code, in which I have a training dataset (x_data)
and an independent test dataset (test_x_data).
Most importantly, I found a few such values in the iris data too.
# Same scaling on both test and train data (centering and scaling)
scaler =
It didn't work, Andy, even after that...
I removed the refitting of the data, but didn't set random_state explicitly.
The same problem persists. Look at these few examples:
Y_true  Y_predict  Class0_prob.  Class1_prob.
1       0          0.28          0.72
hi,
proba calibration with libsvm (using Platt's method) involves data resampling.
So between runs the result can change.
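To illustrate: fixing random_state on the SVC makes the internal resampling used by Platt scaling reproducible, so predict_proba gives the same values across repeated fits. This is just a minimal sketch on synthetic data; make_classification and the parameter values here are illustrative, not taken from the original code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic binary classification data (illustrative only).
X, y = make_classification(n_samples=100, n_features=4, random_state=0)

# Fit twice with the same random_state; the cross-validation shuffle
# inside libsvm's probability calibration is then reproducible.
probas = []
for _ in range(2):
    clf = SVC(kernel='rbf', probability=True, random_state=42)
    clf.fit(X, y)
    probas.append(clf.predict_proba(X))

print(np.allclose(probas[0], probas[1]))
```

Without random_state, the two predict_proba arrays can differ slightly between runs, which is the behavior described above.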
HTH
Alex
On Thu, Jun 26, 2014 at 12:51 AM, Stelios chefa...@gmail.com wrote:
Hello all,
I have the following code:
. . . .
# 'train' is a (M,N) numpy array (input)
These are not different runs, though.
Maybe the calibration is not used for prediction? That would be a bit
odd, though...
On 06/26/2014 09:07 AM, Alexandre Gramfort wrote:
hi,
proba calibration with libsvm (using Platt's method) involves data resampling.
So between runs the result can change.
2014-06-26 9:15 GMT+02:00 Andy t3k...@gmail.com:
Maybe the calibration is not used for prediction? That would be a bit
odd, though...
That's exactly what's going on. Prediction is consistent with
decision_function, but not predict_proba.
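A small check on synthetic data makes this concrete: predict always agrees with the sign of decision_function, while taking the argmax of predict_proba can disagree for points near the boundary, because the Platt-calibrated probabilities come from a separately fitted sigmoid. The dataset and parameters below are illustrative assumptions, not from the original code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic binary data (illustrative only).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

clf = SVC(kernel='rbf', probability=True, random_state=0)
clf.fit(X, y)

pred = clf.predict(X)

# predict follows decision_function: positive margin -> classes_[1].
df_pred = clf.classes_[(clf.decision_function(X) > 0).astype(int)]

# Labels implied by the calibrated probabilities instead.
proba_pred = clf.classes_[np.argmax(clf.predict_proba(X), axis=1)]

print(np.array_equal(pred, df_pred))          # always True
print(np.sum(pred != proba_pred))             # may be nonzero
```

So the two can legitimately disagree on a handful of samples, exactly as in the Y_true / Y_predict table above.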
Hello all,
I have the following code:
. . . .
# 'train' is a (M,N) numpy array (input) and 'traint' is a (M,) numpy array
(target/label)
clf = SVC(kernel='rbf', C=1.74, gamma=0.0023, probability=True)
clf.fit(train, traint)
print clf.classes_  # Ensure our classes are [0, 1]
t1 =