Hey!

Did anybody run that script and can confirm that the results differ on
their machine as well,
or maybe even have an idea why the results of the cross-validation and
of the best classifier from the grid search differ? Should I file this as
an issue on GitHub?
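(For anyone trying to reproduce this: one common cause of such a gap is that GridSearchCV and a separate cross-validation run use different fold splits. A minimal sketch, using scikit-learn's current `model_selection` API rather than the one from 2012, and SVC instead of the SGDClassifier in the gist — passing the *same* CV splitter to both should make the two scores agree:)

```python
# Sketch: fixing the CV splits makes GridSearchCV's best_score_ and a
# follow-up cross_val_score on the best parameters directly comparable.
# Assumes modern scikit-learn; classifier/params are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # fixed folds

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, scoring="f1_macro", cv=cv)
grid.fit(X, y)

# Re-score the best parameter setting on the identical splits:
scores = cross_val_score(SVC(**grid.best_params_), X, y,
                         scoring="f1_macro", cv=cv)
print(grid.best_score_, scores.mean())  # these should match when folds match
```

If the two numbers still differ with identical folds, the estimator itself is non-deterministic (e.g. an unseeded `random_state`), which would also be worth checking in the gist.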

Best,
Tobias

On Fri, Jul 27, 2012 at 5:49 PM, Tobias Günther <[email protected]> wrote:

> Hi!
> I uploaded some code with the iris dataset as a gist:
>
> https://gist.github.com/3188762
>
> Output on my machine:
>
> GRID SEARCH:
> Best f1_score: 0.556
> Best parameters set:
>  alpha: 0.0001
>  loss: 'log'
>  penalty: 'l1'
>  seed: 0
>
> CROSS VALIDATION:
> Best f1_score: 0.52 (+/- 0.05)
>
>
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general