Well, it returns the equivalent of
lambda estimator, X, y: estimator.score(X, y)
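
As a rough sketch (the toy data and parameter grid here are made up just for
illustration), passing such a callable explicitly should behave the same as
leaving scoring at its default:

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=100, n_features=5, random_state=0)

# any callable with the (estimator, X, y) -> float signature is a valid `scoring`
default_like = lambda est, X, y: est.score(X, y)

search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]},
                      scoring=default_like, cv=5)
search.fit(X, y)
print(search.best_params_)  # same selection as with scoring=None
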
On 5 January 2017 at 08:47, Jonathan Taylor wrote:
> (I think this is the right reply-to from a digest... If not, apologies)
>
> Thanks for the pointers. From what I read of the API, I gather that for an
> estimator with a score method, inside GridSearchCV there will be
> pseudo-code like [...]

(I think this is the right reply-to from a digest... If not, apologies)

Thanks for the pointers. From what I read of the API, I gather that for an
estimator with a score method, inside GridSearchCV there will be
pseudo-code like
...
estimator.fit(X_train, y_train)
scorer = estimator.score
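
i.e., I picture something like this simplified sketch of the per-fold loop
(just my mental model, not the actual scikit-learn source, which also handles
parameter setting, parallelism, etc.):

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def fit_and_score(estimator, X, y, scorer, cv=None):
    # simplified per-fold loop for a single parameter setting
    cv = cv if cv is not None else KFold(n_splits=5)
    scores = []
    for train, test in cv.split(X, y):
        est = clone(estimator).fit(X[train], y[train])
        # with scoring=None this reduces to est.score(X[test], y[test])
        scores.append(scorer(est, X[test], y[test]))
    return np.mean(scores)
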
> I've been trying to understand how to use sklearn for this, as there is
> no need for me to rewrite the basic CV functions. I'd like to be able
> to use my own custom estimator (so I guess I just need a subclass of
> BaseEstimator with a `fit` method with an (X, y) signature?), as well as my
> own modification of the score.
You can indeed derive from BaseEstimator and implement fit, predict
and optionally score.
Here is the documentation for the expected estimator API:
http://scikit-learn.org/stable/developers/contributing.html#apis-of-scikit-learn-objects
As this is a linear regression model, you might also want to ...
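
For instance, a minimal regressor following that API could look something like
the sketch below (the class name and the closed-form least-squares fit are only
placeholders for your own model):

import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin

class LeastSquares(BaseEstimator, RegressorMixin):
    """Minimal illustrative estimator: ordinary least squares via lstsq."""

    def __init__(self, fit_intercept=True):
        self.fit_intercept = fit_intercept

    def _design(self, X):
        X = np.asarray(X, dtype=float)
        if self.fit_intercept:
            X = np.hstack([np.ones((X.shape[0], 1)), X])
        return X

    def fit(self, X, y):
        self.coef_, *_ = np.linalg.lstsq(self._design(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._design(X) @ self.coef_

    # RegressorMixin already provides score() as R^2, so GridSearchCV
    # can use this estimator with the default scoring.
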
I'm looking for a simple way to get a small pipeline for choosing a
parameter using a modification of CV for regression-type problems.
The modification is pretty simple: for squared error or logistic deviance,
it is a simple modification of the score as a function of `Y` (binary labels)
and `X.dot(beta)` (the linear predictor).
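
Concretely, something along these lines is what I have in mind (only a sketch:
`modified_deviance` is a stand-in for the actual modification, and I am
assuming a linear model that exposes `coef_` and `intercept_`):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def modified_deviance(y, eta):
    # stand-in for the actual modification: here just the average logistic
    # log-likelihood of the binary labels y given the linear predictor eta
    # (greater is better, which is what GridSearchCV expects of a score)
    return -np.mean(np.logaddexp(0.0, -(2 * y - 1) * eta))

def scorer(estimator, X, y):
    # GridSearchCV accepts any callable with this (estimator, X, y) signature
    eta = X @ estimator.coef_.ravel() + estimator.intercept_
    return modified_deviance(y, eta)

param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(LogisticRegression(), param_grid, scoring=scorer, cv=5)
# search.fit(X, y) would then pick C according to the modified score
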