>
> On 03/10/2013 16:42:44 +0100, Andreas Mueller wrote:
>
> If you have an elegant solution, I'm all ears, though ;)
>
Here's a hacky solution for my particular case, which requires git revert
2d9cb81b8 to work at HEAD. It works by returning a Score object from the
Scorer, which pretends it is the …
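For concreteness, a minimal sketch of that "Score object" idea, assuming a float subclass that carries precision and recall alongside the F1 value (the class name, helper name and scorer signature below are assumptions, not the actual patch):

    from sklearn.metrics import f1_score, precision_score, recall_score

    class Score(float):
        """Looks like a plain F1 value to the grid search, but also
        remembers the precision and recall it was computed from."""
        def __new__(cls, f1, precision, recall):
            obj = super(Score, cls).__new__(cls, f1)
            obj.precision = precision
            obj.recall = recall
            return obj

    def f1_with_extras(estimator, X, y):
        # Usable as a scoring callable, e.g. scoring=f1_with_extras (hypothetical helper).
        y_pred = estimator.predict(X)
        return Score(f1_score(y, y_pred),
                     precision_score(y, y_pred),
                     recall_score(y, y_pred))

Note that any arithmetic on a Score (e.g. summing the folds to take a mean) falls back to a plain float, so the extra attributes survive only as long as nothing aggregates them.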
On 03/10/2013 09:09 PM, Lars Buitinck wrote:
> 2013/3/10 Andreas Mueller:
>> One other open task is adding good "see also" sections in the documentation
>> and generally improving documentation consistency and quality.
> Also not mentioned in the issue tracker (I think) is that we want to
> support …
2013/3/10 Andreas Mueller:
> One other open task is adding good "see also" sections in the documentation
> and generally improving documentation consistency and quality.
Also not mentioned in the issue tracker (I think) is that we want to
support Python 3. There is partial support for that (mostly …
Hi Chinmay.
You could work on improving test coverage or any of the issues labeled easy:
https://github.com/scikit-learn/scikit-learn/issues?labels=Easy
In particular you could have a look at
https://github.com/scikit-learn/scikit-learn/issues/1678
Other than that, I don't see an issue that is part …
Hi,
I have read the PSF minimum requirements for prospective students who wish
to participate in GSoC 2013.
I am a newbie to sklearn and machine learning.
Are there any particular easy-fix issues I could handle? Do I first need to
become familiar with them before trying to handle issues?
Thanks in advance
On 03/10/2013 04:42 PM, Andreas Mueller wrote:
> What you can always do is use IterGrid and iterate over the parameters
> yourself and store whatever information you'd like.
>
As an aside: if you had all the fitted estimators, it would also be quite
easy to compute the other scores, right?
Would that …
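A rough sketch of that IterGrid route, assuming the 0.13-era API (IterGrid was later renamed ParameterGrid); the function name and the choice of metrics are only illustrative:

    import numpy as np
    from sklearn.base import clone
    from sklearn.cross_validation import StratifiedKFold
    from sklearn.grid_search import IterGrid
    from sklearn.metrics import f1_score, precision_score, recall_score

    def manual_grid_search(estimator, param_grid, X, y, n_folds=5):
        X, y = np.asarray(X), np.asarray(y)
        results = []
        for params in IterGrid(param_grid):
            fold_metrics = []
            for train, test in StratifiedKFold(y, n_folds=n_folds):
                est = clone(estimator).set_params(**params)
                est.fit(X[train], y[train])
                y_pred = est.predict(X[test])
                fold_metrics.append({
                    'f1': f1_score(y[test], y_pred),
                    'precision': precision_score(y[test], y_pred),
                    'recall': recall_score(y[test], y_pred),
                })
            # Store whatever you need per grid point: metrics, or even the fitted estimators.
            results.append((params, fold_metrics))
        return results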
On 03/10/2013 02:04 PM, Joel Nothman wrote:
>
>
> The only output of the ``scoring`` parameter that is currently stored
> in GridSearchCV.cv_scores_ is the score. But the score is also
> aggregated (by BaseSearchCV to produce mean_validation_score), which
> requires that it can be added to 0, ac…
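Put differently, the aggregation boils down to something like the following (a simplification for illustration, not the actual BaseSearchCV code):

    fold_scores = [0.71, 0.69, 0.74]   # one value per CV fold for one grid point
    mean_validation_score = sum(fold_scores) / float(len(fold_scores))
    # sum() starts from 0, so whatever the scorer returns must support
    # 0 + score -- hence the "can be added to 0" requirement.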
Thanks Andy,
On 03/10/2013 11:01, Andreas Mueller wrote:
Yes, we want to find the model with the highest F1 score, and so use it for
the grid search. But it is hard to interpret an F1 score alone, because it
is by definition a compromise between precision and recall. The latter are
more informative …
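For reference, F1 is the harmonic mean of the two, which is why a single F1 number hides how precision and recall trade off; a toy example:

    from sklearn.metrics import f1_score, precision_score, recall_score

    y_true = [0, 0, 1, 1, 1, 1]
    y_pred = [0, 1, 1, 1, 0, 1]
    p = precision_score(y_true, y_pred)   # 3 / 4 = 0.75
    r = recall_score(y_true, y_pred)      # 3 / 4 = 0.75
    f1 = f1_score(y_true, y_pred)         # 2 * p * r / (p + r) = 0.75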
On 03/10/2013 12:01 PM, Joel Nothman wrote:
> Firstly, thank you to the devs for a great toolkit.
>
> I am using sklearn's GridSearchCV for a classification task with the F1
> metric. GridSearchCV.fit() produces a .cv_scores_ attribute which
> allows me to view the scores for each fold for each point in the grid …
Firstly, thank you to the devs for a great toolkit.
I am using sklearn's GridSearchCV for a classification task with the F1 metric.
GridSearchCV.fit() produces a .cv_scores_ attribute which allows me to view
the scores for each fold for each point in the grid. But it does not let me
view the precision and recall …
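For anyone reading along, the setup described above looks roughly like this sketch (the attribute is cv_scores_ in the development branch referred to here, grid_scores_ in released versions; the estimator and dataset are placeholders):

    from sklearn.datasets import make_classification
    from sklearn.grid_search import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)
    grid = GridSearchCV(SVC(), {'C': [0.1, 1, 10]}, scoring='f1', cv=5)
    grid.fit(X, y)
    for point in grid.cv_scores_:   # grid_scores_ in released versions
        print(point)                # parameters, mean score, per-fold scores
    # Only the F1 values show up here; per-fold precision and recall are not kept.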