Andreas wrote:
> Sorry for being terse, I should be working.
Yes, me too. I think this is the last commit you'll see from me for a while.
> We really can not break backward compatibility.
> One possibility is to have the old ``grid_scores_`` be the same as
Backwards-compatible ``grid_scores_``
Here's what I got so far:
http://pastie.org/6464655
It's about 40% faster.
I still need to add the fixed vocabulary option and parallelize.
Hi Alex.
It should be fully connected. I'll check again.
Thanks.
Andy
On 03/12/2013 04:07 PM, Alexandre Gramfort wrote:
> hi Andy,
>
> is your graph fully connected? i.e. one connected component? If not,
> you should tell the estimator.
>
> let me know if it works.
>
> Alex
>
> On Tue, Mar 12, 2013
hi Andy,
is your graph fully connected? i.e. one connected component? If not,
you should tell the estimator.
let me know if it works.
Alex
On Tue, Mar 12, 2013 at 3:16 PM, Andreas Mueller wrote:
> Hey everybody.
> I have been trying to use Ward with a fixed connectivity matrix today
> and ran int
Hey everybody.
I have been trying to use Ward with a fixed connectivity matrix today
and ran into some problems:
File "/home/VI/staff/amueller/checkout/scikit-learn/sklearn/base.py",
line 330, in fit_predict
self.fit(X)
File
"/home/VI/staff/amueller/checkout/scikit-learn/sklearn/clus
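A minimal sketch (not from the thread) of the check Alex suggests: counting the connected components of a connectivity matrix before passing it to Ward. The example graph is hypothetical.

```python
# Sketch: verify a connectivity matrix is fully connected before
# using it with Ward, as Alex suggests.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Hypothetical 4-sample connectivity: samples {0, 1} and {2, 3} are
# linked internally but not to each other, so there are 2 components.
connectivity = csr_matrix(np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]))

n_components, labels = connected_components(connectivity)
print(n_components)  # 2 -> tell the estimator, or repair the graph
```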
On 03/12/2013 02:05 PM, Joel Nothman wrote:
> Andreas wrote:
>
>> In the meantime, did you have a look at
>> https://github.com/scikit-learn/scikit-learn/pull/1742?
> No, I hadn't, but now I've merged that (not a trivial merge), and
> changed a couple of things a little.
>
> https://github.com/jnot
Andreas wrote:
> In the meantime, did you have a look at
> https://github.com/scikit-learn/scikit-learn/pull/1742?
No, I hadn't, but now I've merged that (not a trivial merge), and
changed a couple of things a little.
https://github.com/jnothman/scikit-learn/tree/grid_search_more_info (6e71aeaf8
2013/3/12 Raj Arasu:
> I am new to the "hashing trick" in general, but should I expect to get the
> same coefficient matrix when training a BernoulliNB model using a
> DictVectorizer versus a FeatureHasher as feature extractors? I am getting
> different coefficient matrixes.
No, you will most li
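A short sketch (my own illustration, not from the thread) of why the two extractors give different coefficient matrices: DictVectorizer assigns one column per feature name, while FeatureHasher picks columns by hashing, so the column layout (and shape) differs, and hash collisions can merge features.

```python
# Sketch: DictVectorizer vs. FeatureHasher produce differently laid-out
# (and differently shaped) feature matrices for the same data, so the
# fitted BernoulliNB coefficient matrices cannot be compared directly.
from sklearn.feature_extraction import DictVectorizer, FeatureHasher
from sklearn.naive_bayes import BernoulliNB

docs = [{"red": 1, "round": 1}, {"yellow": 1, "long": 1}]
y = [0, 1]

X_dv = DictVectorizer().fit_transform(docs)             # one column per feature name
X_fh = FeatureHasher(n_features=8).fit_transform(docs)  # columns chosen by hashing

clf_dv = BernoulliNB().fit(X_dv, y)
clf_fh = BernoulliNB().fit(X_fh, y)

# The shapes alone already differ; even with equal n_features the
# column order would depend on the hash function.
print(clf_dv.feature_log_prob_.shape)  # (2, 4)
print(clf_fh.feature_log_prob_.shape)  # (2, 8)
```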
I am new to the "hashing trick" in general, but should I expect to get the
same coefficient matrix when training a BernoulliNB model using a
DictVectorizer versus a FeatureHasher as feature extractors? I am getting
different coefficient matrixes.
---
Hi Joel.
Thanks for your input.
Thinking about per-fold and per-parameter values is definitely a good idea.
I didn't have time to go through your proposal in detail, will try to do
asap.
In the meantime, did you have a look at
https://github.com/scikit-learn/scikit-learn/pull/1742?
Cheers,
Andy
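One way to picture the "per-fold and per-parameter values" idea is a flat list of named tuples, one per parameter setting, each carrying the fold scores alongside the mean. This is a sketch under my own assumptions (the `CVScore` tuple and field names are hypothetical), not the PR's actual API.

```python
# Sketch: a grid_scores_-style flat list that still keeps per-fold,
# per-parameter information. Names here are illustrative only.
from collections import namedtuple
import numpy as np

CVScore = namedtuple(
    "CVScore",
    ["parameters", "mean_validation_score", "cv_validation_scores"],
)

# Hypothetical results: fold scores for two settings of a parameter "C".
fold_scores = {
    ("C", 1.0): [0.80, 0.82, 0.79],
    ("C", 10.0): [0.85, 0.84, 0.86],
}

grid_scores_ = [
    CVScore({name: value}, float(np.mean(scores)), np.array(scores))
    for (name, value), scores in fold_scores.items()
]

for s in grid_scores_:
    print(s.parameters, round(s.mean_validation_score, 3))
```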