Dear All,
I'm stuck with a problem and I don't know if it's a bug. I'm defining the 
optimization parameters C and gamma for my SVM in this way:
import numpy
from sklearn import svm, grid_search

C = 10.0 ** numpy.arange(-3, 9)
gamma = 10.0 ** numpy.arange(-6, 4)
param_grid = dict(gamma=gamma, C=C)
svr = svm.SVC(kernel='rbf')
clfopt = grid_search.GridSearchCV(svr, param_grid)
clfopt.fit(X_train, Y_train)
No matter which dataset I use, I always get the same C and gamma: 0.001 and 
1e-06. With those values I get a worse result! If I set C and gamma manually 
to different values, I get a better result!
With this optimization I also always get the same result for different 
training subsets of my dataset!
I'm completely lost, and I don't know if this is due to the training set 
size. I say this because if I reduce my dataset, the optimization works. So I 
was thinking the cause could be the large dataset.
Maybe it is a bug that, with a big dataset, it gives these strange values of 
C and gamma (which are the lower limits of the ranges I set for the search).
Do you know if there is another way to find the best C and gamma without 
using grid_search.GridSearchCV?
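For example, I was wondering whether a randomized search over the same ranges would be a reasonable alternative. Here is just a sketch on a small synthetic dataset standing in for my real X_train/Y_train (note: RandomizedSearchCV lives in sklearn.model_selection in newer releases; older versions had it in sklearn.grid_search):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Small synthetic dataset standing in for X_train, Y_train
X_train, Y_train = make_classification(n_samples=200, n_features=10,
                                       random_state=0)

# Same C and gamma ranges as my grid, but sampled randomly
param_distributions = {
    "C": 10.0 ** np.arange(-3, 9),
    "gamma": 10.0 ** np.arange(-6, 4),
}

search = RandomizedSearchCV(SVC(kernel="rbf"), param_distributions,
                            n_iter=20, cv=3, random_state=0)
search.fit(X_train, Y_train)
print(search.best_params_)
```

Would something like this be expected to behave differently from the full grid on a large dataset?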
If you think this is strange, I will report it as a bug.
Thanks All!!!
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general