On Fri, 25 Nov 2016 at 20:24 Roman Yurchak wrote:
> On 24/11/16 09:00, Jaidev Deshpande wrote:
> >
> > well, `param_grid` in GridSearchCV can also be a list of
> > dictionaries, so you could directly specify the cases you are
> > interested in (instead …)

Actually, now that I think of it, I don't know if it will necessarily be
simpler. What if I have a massive grid and only a few exceptions?
Enumerating the complement of that small subset would be much more
expensive than specifying the exceptions.
What do you think?
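One way to keep the big grid but drop a handful of bad combinations is to enumerate the full grid, filter out the exceptions, and hand GridSearchCV the survivors as a list of single-point dicts. A minimal stdlib sketch (the exception rule below is just illustrative, and sklearn's `ParameterGrid` could replace the hand-rolled enumeration):

```python
from itertools import product

grid = {'solver': ['sgd', 'adam'],
        'learning_rate': ['constant', 'invscaling', 'adaptive']}

def iter_grid(grid):
    # yield every point of the cartesian product, as a dict
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def is_exception(params):
    # hypothetical rule: learning_rate only matters for 'sgd',
    # so keep a single representative ('constant') for 'adam'
    return params['solver'] == 'adam' and params['learning_rate'] != 'constant'

candidates = [p for p in iter_grid(grid) if not is_exception(p)]
# GridSearchCV's param_grid wants list-valued dicts, so wrap each point
param_grid = [{k: [v] for k, v in p.items()} for p in candidates]
print(len(param_grid))  # 6 combinations minus 2 exceptions -> 4
```

The wrapped `param_grid` can then be passed to `GridSearchCV` directly, since each entry is a dict whose values are lists.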
>
> On 23/11/16 11:15, J…
… correct.
>
> Should be
>
>
> [{'learning_rate': ['constant', 'invscaling', 'adaptive'], 'solver':
> ['sgd']}, {'solver': ['adam']}]
>
> (Note that all of the dicts' values are lists.)
>
Ah, thanks!
> …'sgd',]},
> {'solver': ['adam',]}])
>
> DataFrame(gs.fit(X, y).cv_results_)
> ```
>
> Would give
>
> [image: image.png]
>
> HTH :)
>
Haha, this is perfect. I didn't know you could pass a list of dicts.
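The quoted snippet above is truncated, so here is a self-contained version of the same trick (the toy data and the tiny MLP settings are mine, not from the thread):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=60, n_features=5, random_state=0)

# learning_rate only matters for 'sgd', so 'adam' appears once, alone
param_grid = [
    {'learning_rate': ['constant', 'invscaling', 'adaptive'],
     'solver': ['sgd']},
    {'solver': ['adam']},
]
gs = GridSearchCV(MLPClassifier(hidden_layer_sizes=(5,), max_iter=20),
                  param_grid, cv=2)
gs.fit(X, y)
print(len(gs.cv_results_['params']))  # 3 sgd variants + 1 adam = 4
```

Wrapping `gs.cv_results_` in `pandas.DataFrame`, as in the thread, then gives the table view with one row per candidate.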
Hi,
Sometimes when using GridSearchCV, I realize that in the grid there are
certain combinations of hyperparameters that are either incompatible or
redundant. For example, when using an MLP, if I specify the following grid:
grid = {'solver': ['sgd', 'adam'], 'learning_rate': ['constant',
'invscaling', 'adaptive']}
then every `learning_rate` value is redundant when the solver is 'adam'.
On Mon, 4 Jul 2016 at 15:33 Tom DLT wrote:
> note2:
>
> The LogisticRegression and Ridge(solver='sag') code do fit the intercept
> without breaking sparsity.
>
> For other solvers in Ridge, in the case of a sparse X input, the solver
> will automatically be changed to 'sag' and raise a warning.
>
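A quick check of the behaviour described above (toy data is mine; this assumes a scikit-learn where Ridge's 'sag' solver accepts sparse input):

```python
import numpy as np
from scipy import sparse
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = sparse.random(50, 8, density=0.3, format='csr', random_state=rng)
y = rng.rand(50)

# 'sag' fits the intercept without densifying the sparse X
model = Ridge(solver='sag', fit_intercept=True, max_iter=10000).fit(X, y)
print(sparse.issparse(X), model.intercept_)
```

X stays sparse throughout, and `model.intercept_` is still estimated.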
Hi,
I often run into cases where I've forgotten that my input to the
`AnyEstimator.fit` method is a sparse matrix, and I've set
`fit_intercept=False`.
To avoid this, I could of course make a habit of never tampering with the
default `fit_intercept=True`, but I think it would be better and mo…